00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2033
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3293
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.138 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.138 The recommended git tool is: git
00:00:00.139 using credential 00000000-0000-0000-0000-000000000002
00:00:00.141 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.183 Fetching changes from the remote Git repository
00:00:00.185 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.222 Using shallow fetch with depth 1
00:00:00.222 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.222 > git --version # timeout=10
00:00:00.262 > git --version # 'git version 2.39.2'
00:00:00.262 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.284 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.284 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.552 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.562 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.572 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:07.572 > git config core.sparsecheckout # timeout=10
00:00:07.583 > git read-tree -mu HEAD # timeout=10
00:00:07.599 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:07.642 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:07.642 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:07.754 [Pipeline] Start of Pipeline
00:00:07.770 [Pipeline] library
00:00:07.771 Loading library shm_lib@master
00:00:07.771 Library shm_lib@master is cached. Copying from home.
00:00:07.783 [Pipeline] node
00:00:07.792 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest_2
00:00:07.794 [Pipeline] {
00:00:07.802 [Pipeline] catchError
00:00:07.803 [Pipeline] {
00:00:07.814 [Pipeline] wrap
00:00:07.823 [Pipeline] {
00:00:07.828 [Pipeline] stage
00:00:07.829 [Pipeline] { (Prologue)
00:00:07.845 [Pipeline] echo
00:00:07.846 Node: VM-host-SM9
00:00:07.852 [Pipeline] cleanWs
00:00:07.860 [WS-CLEANUP] Deleting project workspace...
00:00:07.860 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.865 [WS-CLEANUP] done
00:00:08.035 [Pipeline] setCustomBuildProperty
00:00:08.112 [Pipeline] httpRequest
00:00:08.130 [Pipeline] echo
00:00:08.131 Sorcerer 10.211.164.101 is alive
00:00:08.137 [Pipeline] httpRequest
00:00:08.140 HttpMethod: GET
00:00:08.141 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:08.141 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:08.161 Response Code: HTTP/1.1 200 OK
00:00:08.162 Success: Status code 200 is in the accepted range: 200,404
00:00:08.162 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:26.958 [Pipeline] sh
00:00:27.240 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:27.259 [Pipeline] httpRequest
00:00:27.289 [Pipeline] echo
00:00:27.291 Sorcerer 10.211.164.101 is alive
00:00:27.300 [Pipeline] httpRequest
00:00:27.304 HttpMethod: GET
00:00:27.305 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:27.305 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:27.318 Response Code: HTTP/1.1 200 OK
00:00:27.319 Success: Status code 200 is in the accepted range: 200,404
00:00:27.319 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:01:10.700 [Pipeline] sh
00:01:10.981 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:01:14.280 [Pipeline] sh
00:01:14.558 + git -C spdk log --oneline -n5
00:01:14.558 dbef7efac test: fix dpdk builds on ubuntu24
00:01:14.558 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:01:14.558 507e9ba07 nvme: add lock_depth for ctrlr_lock
00:01:14.558 62fda7b5f nvme: check pthread_mutex_destroy() return value
00:01:14.558 e03c164a1 nvme: add nvme_ctrlr_lock
00:01:14.578 [Pipeline] writeFile
00:01:14.594 [Pipeline] sh
00:01:14.873 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:14.884 [Pipeline] sh
00:01:15.163 + cat autorun-spdk.conf
00:01:15.164 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.164 SPDK_TEST_NVME=1
00:01:15.164 SPDK_TEST_FTL=1
00:01:15.164 SPDK_TEST_ISAL=1
00:01:15.164 SPDK_RUN_ASAN=1
00:01:15.164 SPDK_RUN_UBSAN=1
00:01:15.164 SPDK_TEST_XNVME=1
00:01:15.164 SPDK_TEST_NVME_FDP=1
00:01:15.164 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:15.170 RUN_NIGHTLY=1
00:01:15.172 [Pipeline] }
00:01:15.186 [Pipeline] // stage
00:01:15.197 [Pipeline] stage
00:01:15.199 [Pipeline] { (Run VM)
00:01:15.212 [Pipeline] sh
00:01:15.508 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:15.508 + echo 'Start stage prepare_nvme.sh'
00:01:15.508 Start stage prepare_nvme.sh
00:01:15.508 + [[ -n 2 ]]
00:01:15.508 + disk_prefix=ex2
00:01:15.508 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:01:15.508 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:01:15.508 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:01:15.508 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.508 ++ SPDK_TEST_NVME=1
00:01:15.508 ++ SPDK_TEST_FTL=1
00:01:15.508 ++ SPDK_TEST_ISAL=1
00:01:15.508 ++ SPDK_RUN_ASAN=1
00:01:15.508 ++ SPDK_RUN_UBSAN=1
00:01:15.508 ++ SPDK_TEST_XNVME=1
00:01:15.508 ++ SPDK_TEST_NVME_FDP=1
00:01:15.508 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:15.508 ++ RUN_NIGHTLY=1
00:01:15.508 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:01:15.508 + nvme_files=()
00:01:15.508 + declare -A nvme_files
00:01:15.508 + backend_dir=/var/lib/libvirt/images/backends
00:01:15.508 + nvme_files['nvme.img']=5G
00:01:15.508 + nvme_files['nvme-cmb.img']=5G
00:01:15.508 + nvme_files['nvme-multi0.img']=4G
00:01:15.508 + nvme_files['nvme-multi1.img']=4G
00:01:15.508 + nvme_files['nvme-multi2.img']=4G
00:01:15.508 + nvme_files['nvme-openstack.img']=8G
00:01:15.508 + nvme_files['nvme-zns.img']=5G
00:01:15.508 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:15.508 + (( SPDK_TEST_FTL == 1 ))
00:01:15.508 + nvme_files["nvme-ftl.img"]=6G
00:01:15.508 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:15.508 + nvme_files["nvme-fdp.img"]=1G
00:01:15.508 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:15.508 + for nvme in "${!nvme_files[@]}"
00:01:15.508 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:01:15.508 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:15.508 + for nvme in "${!nvme_files[@]}"
00:01:15.508 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G
00:01:15.508 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:15.508 + for nvme in "${!nvme_files[@]}"
00:01:15.508 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:01:15.508 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:15.508 + for nvme in "${!nvme_files[@]}"
00:01:15.508 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:01:15.508 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:15.508 + for nvme in "${!nvme_files[@]}"
00:01:15.508 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:01:15.508 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:15.508 + for nvme in "${!nvme_files[@]}"
00:01:15.508 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:01:15.508 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:15.508 + for nvme in "${!nvme_files[@]}"
00:01:15.508 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:01:15.508 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:15.508 + for nvme in "${!nvme_files[@]}"
00:01:15.508 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G
00:01:15.767 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:15.767 + for nvme in "${!nvme_files[@]}"
00:01:15.767 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:01:15.767 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:15.767 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:01:15.767 + echo 'End stage prepare_nvme.sh'
00:01:15.767 End stage prepare_nvme.sh
00:01:15.779 [Pipeline] sh
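
The prepare stage traced above reduces to one pattern: a bash associative array maps image names to sizes, optional images are appended only when the matching SPDK_TEST_* flag from autorun-spdk.conf is set, and every entry is materialized as a raw backing file under /var/lib/libvirt/images/backends. A minimal, standalone sketch of that pattern follows; it assumes create_nvme_img.sh ultimately wraps qemu-img (the "Formatting ... fmt=raw ... preallocation=falloc" lines above are qemu-img output, but the script's real internals are not shown in this log).

    #!/usr/bin/env bash
    # Sketch of the image-prep loop traced above; sizes and paths mirror the log.
    declare -A nvme_files=(
      [nvme.img]=5G [nvme-cmb.img]=5G [nvme-zns.img]=5G
      [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
      [nvme-openstack.img]=8G
    )
    # Feature-gated images, exactly as the trace shows
    (( SPDK_TEST_FTL == 1 )) && nvme_files[nvme-ftl.img]=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files[nvme-fdp.img]=1G
    backend_dir=/var/lib/libvirt/images/backends
    mkdir -p "$backend_dir"
    for nvme in "${!nvme_files[@]}"; do
      # qemu-img is an assumption here; it is what prints the "Formatting ..." lines
      qemu-img create -f raw -o preallocation=falloc \
        "$backend_dir/ex2-$nvme" "${nvme_files[$nvme]}"
    done
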
00:01:16.056 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:16.056 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38
00:01:16.056
00:01:16.056 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:01:16.056 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:01:16.056 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:01:16.056 HELP=0
00:01:16.056 DRY_RUN=0
00:01:16.056 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,
00:01:16.056 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:16.056 NVME_AUTO_CREATE=0
00:01:16.056 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,,
00:01:16.056 NVME_CMB=,,,,
00:01:16.056 NVME_PMR=,,,,
00:01:16.056 NVME_ZNS=,,,,
00:01:16.056 NVME_MS=true,,,,
00:01:16.056 NVME_FDP=,,,on,
00:01:16.056 SPDK_VAGRANT_DISTRO=fedora38
00:01:16.056 SPDK_VAGRANT_VMCPU=10
00:01:16.056 SPDK_VAGRANT_VMRAM=12288
00:01:16.056 SPDK_VAGRANT_PROVIDER=libvirt
00:01:16.056 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:16.056 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:16.056 SPDK_OPENSTACK_NETWORK=0
00:01:16.056 VAGRANT_PACKAGE_BOX=0
00:01:16.056 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:01:16.056 FORCE_DISTRO=true
00:01:16.056 VAGRANT_BOX_VERSION=
00:01:16.056 EXTRA_VAGRANTFILES=
00:01:16.056 NIC_MODEL=e1000
00:01:16.056
00:01:16.056 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt'
00:01:16.056 /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:01:19.340 Bringing machine 'default' up with 'libvirt' provider...
00:01:19.908 ==> default: Creating image (snapshot of base box volume).
00:01:19.908 ==> default: Creating domain with the following settings...
00:01:19.908 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721834801_f6404168124489f5d6d5
00:01:19.908 ==> default: -- Domain type: kvm
00:01:19.908 ==> default: -- Cpus: 10
00:01:19.908 ==> default: -- Feature: acpi
00:01:19.908 ==> default: -- Feature: apic
00:01:19.908 ==> default: -- Feature: pae
00:01:19.908 ==> default: -- Memory: 12288M
00:01:19.908 ==> default: -- Memory Backing: hugepages:
00:01:19.908 ==> default: -- Management MAC:
00:01:19.908 ==> default: -- Loader:
00:01:19.908 ==> default: -- Nvram:
00:01:19.908 ==> default: -- Base box: spdk/fedora38
00:01:19.908 ==> default: -- Storage pool: default
00:01:19.908 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721834801_f6404168124489f5d6d5.img (20G)
00:01:19.908 ==> default: -- Volume Cache: default
00:01:19.908 ==> default: -- Kernel:
00:01:19.908 ==> default: -- Initrd:
00:01:19.908 ==> default: -- Graphics Type: vnc
00:01:19.908 ==> default: -- Graphics Port: -1
00:01:19.908 ==> default: -- Graphics IP: 127.0.0.1
00:01:19.908 ==> default: -- Graphics Password: Not defined
00:01:19.908 ==> default: -- Video Type: cirrus
00:01:19.908 ==> default: -- Video VRAM: 9216
00:01:19.908 ==> default: -- Sound Type:
00:01:19.908 ==> default: -- Keymap: en-us
00:01:19.908 ==> default: -- TPM Path:
00:01:19.908 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:19.908 ==> default: -- Command line args:
00:01:19.908 ==> default: -> value=-device,
00:01:19.908 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:19.908 ==> default: -> value=-drive,
00:01:19.908 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:19.908 ==> default: -> value=-device,
00:01:19.908 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:19.908 ==> default: -> value=-device,
00:01:19.908 ==> default: -> value=nvme,id=nvme-1,serial=12341,
00:01:19.908 ==> default: -> value=-drive,
00:01:19.908 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0,
00:01:19.908 ==> default: -> value=-device,
00:01:19.908 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:19.908 ==> default: -> value=-device,
00:01:19.908 ==> default: -> value=nvme,id=nvme-2,serial=12342,
00:01:19.908 ==> default: -> value=-drive,
00:01:19.908 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:19.908 ==> default: -> value=-device,
00:01:19.908 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:19.908 ==> default: -> value=-drive,
00:01:19.908 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:19.908 ==> default: -> value=-device,
00:01:19.908 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:19.908 ==> default: -> value=-drive,
00:01:19.908 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:19.908 ==> default: -> value=-device,
00:01:19.908 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:19.908 ==> default: -> value=-device,
00:01:19.908 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:19.908 ==> default: -> value=-device,
00:01:19.908 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3,
00:01:19.908 ==> default: -> value=-drive,
00:01:19.908 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:19.908 ==> default: -> value=-device,
00:01:19.908 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:19.908 ==> default: Creating shared folders metadata...
00:01:20.168 ==> default: Starting domain.
00:01:21.573 ==> default: Waiting for domain to get an IP address...
00:01:36.546 ==> default: Waiting for SSH to become available...
00:01:37.921 ==> default: Configuring and enabling network interfaces...
00:01:42.111 default: SSH address: 192.168.121.171:22
00:01:42.111 default: SSH username: vagrant
00:01:42.111 default: SSH auth method: private key
00:01:44.012 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:52.125 ==> default: Mounting SSHFS shared folder...
00:01:53.061 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:01:53.061 ==> default: Checking Mount..
00:01:54.114 ==> default: Folder Successfully Mounted!
00:01:54.114 ==> default: Running provisioner: file...
00:01:55.051 default: ~/.gitconfig => .gitconfig
00:01:55.309
00:01:55.309 SUCCESS!
00:01:55.309
00:01:55.309 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use.
00:01:55.309 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:55.309 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm.
00:01:55.309
00:01:55.319 [Pipeline] }
00:01:55.336 [Pipeline] // stage
00:01:55.346 [Pipeline] dir
00:01:55.347 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt
00:01:55.349 [Pipeline] {
00:01:55.363 [Pipeline] catchError
00:01:55.365 [Pipeline] {
00:01:55.379 [Pipeline] sh
00:01:55.657 + vagrant ssh-config --host vagrant
00:01:55.658 + sed -ne /^Host/,$p
00:01:55.658 + tee ssh_conf
00:01:59.849 Host vagrant
00:01:59.849 HostName 192.168.121.171
00:01:59.849 User vagrant
00:01:59.849 Port 22
00:01:59.849 UserKnownHostsFile /dev/null
00:01:59.849 StrictHostKeyChecking no
00:01:59.849 PasswordAuthentication no
00:01:59.849 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:01:59.849 IdentitiesOnly yes
00:01:59.849 LogLevel FATAL
00:01:59.849 ForwardAgent yes
00:01:59.849 ForwardX11 yes
00:01:59.849
00:01:59.864 [Pipeline] withEnv
00:01:59.867 [Pipeline] {
00:01:59.883 [Pipeline] sh
00:02:00.165 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:00.165 source /etc/os-release
00:02:00.165 [[ -e /image.version ]] && img=$(< /image.version)
00:02:00.165 # Minimal, systemd-like check.
00:02:00.165 if [[ -e /.dockerenv ]]; then
00:02:00.165 # Clear garbage from the node's name:
00:02:00.165 # agt-er_autotest_547-896 -> autotest_547-896
00:02:00.165 # $HOSTNAME is the actual container id
00:02:00.165 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:00.165 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:00.165 # We can assume this is a mount from a host where container is running,
00:02:00.165 # so fetch its hostname to easily identify the target swarm worker.
00:02:00.165 container="$(< /etc/hostname) ($agent)"
00:02:00.165 else
00:02:00.165 # Fallback
00:02:00.165 container=$agent
00:02:00.165 fi
00:02:00.165 fi
00:02:00.165 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:00.165
00:02:00.177 [Pipeline] }
00:02:00.197 [Pipeline] // withEnv
00:02:00.206 [Pipeline] setCustomBuildProperty
00:02:00.221 [Pipeline] stage
00:02:00.223 [Pipeline] { (Tests)
00:02:00.242 [Pipeline] sh
00:02:00.522 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:00.794 [Pipeline] sh
00:02:01.074 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:01.347 [Pipeline] timeout
00:02:01.347 Timeout set to expire in 40 min
00:02:01.349 [Pipeline] {
00:02:01.365 [Pipeline] sh
00:02:01.678 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:02.247 HEAD is now at dbef7efac test: fix dpdk builds on ubuntu24
00:02:02.261 [Pipeline] sh
00:02:02.540 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:02.812 [Pipeline] sh
00:02:03.092 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:03.367 [Pipeline] sh
00:02:03.647 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:02:03.906 ++ readlink -f spdk_repo
00:02:03.906 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:03.906 + [[ -n /home/vagrant/spdk_repo ]]
00:02:03.906 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:03.906 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:03.906 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:03.906 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:03.906 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:03.906 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:03.906 + cd /home/vagrant/spdk_repo
00:02:03.906 + source /etc/os-release
00:02:03.906 ++ NAME='Fedora Linux'
00:02:03.906 ++ VERSION='38 (Cloud Edition)'
00:02:03.906 ++ ID=fedora
00:02:03.906 ++ VERSION_ID=38
00:02:03.906 ++ VERSION_CODENAME=
00:02:03.906 ++ PLATFORM_ID=platform:f38
00:02:03.906 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:03.906 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:03.906 ++ LOGO=fedora-logo-icon
00:02:03.906 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:03.906 ++ HOME_URL=https://fedoraproject.org/
00:02:03.906 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:03.906 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:03.906 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:03.906 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:03.906 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:03.906 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:03.906 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:03.906 ++ SUPPORT_END=2024-05-14
00:02:03.906 ++ VARIANT='Cloud Edition'
00:02:03.906 ++ VARIANT_ID=cloud
00:02:03.906 + uname -a
00:02:03.906 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:03.906 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:03.906 Hugepages
00:02:03.906 node hugesize free / total
00:02:03.906 node0 1048576kB 0 / 0
00:02:03.906 node0 2048kB 0 / 0
00:02:03.906
00:02:03.906 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:03.906 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:03.906 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:03.906 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:04.165 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:04.165 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:04.165 + rm -f /tmp/spdk-ld-path
00:02:04.165 + source autorun-spdk.conf
00:02:04.165 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:04.165 ++ SPDK_TEST_NVME=1
00:02:04.165 ++ SPDK_TEST_FTL=1
00:02:04.165 ++ SPDK_TEST_ISAL=1
00:02:04.165 ++ SPDK_RUN_ASAN=1
00:02:04.165 ++ SPDK_RUN_UBSAN=1
00:02:04.165 ++ SPDK_TEST_XNVME=1
00:02:04.165 ++ SPDK_TEST_NVME_FDP=1
00:02:04.165 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:04.165 ++ RUN_NIGHTLY=1
00:02:04.165 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:04.165 + [[ -n '' ]]
00:02:04.165 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:04.165 + for M in /var/spdk/build-*-manifest.txt
00:02:04.165 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:04.165 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:04.165 + for M in /var/spdk/build-*-manifest.txt
00:02:04.165 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:04.165 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:04.165 ++ uname
00:02:04.165 + [[ Linux == \L\i\n\u\x ]]
00:02:04.165 + sudo dmesg -T
00:02:04.165 + sudo dmesg --clear
00:02:04.165 + dmesg_pid=5150
00:02:04.165 + [[ Fedora Linux == FreeBSD ]]
00:02:04.165 + sudo dmesg -Tw
00:02:04.165 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:04.165 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:04.165 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:04.165 + [[ -x /usr/src/fio-static/fio ]]
00:02:04.165 + export FIO_BIN=/usr/src/fio-static/fio
00:02:04.165 + FIO_BIN=/usr/src/fio-static/fio
00:02:04.165 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:04.165 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:04.165 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:04.165 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:04.165 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:04.165 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:04.165 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:04.165 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:04.165 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:04.165 Test configuration:
00:02:04.165 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:04.165 SPDK_TEST_NVME=1
00:02:04.165 SPDK_TEST_FTL=1
00:02:04.165 SPDK_TEST_ISAL=1
00:02:04.165 SPDK_RUN_ASAN=1
00:02:04.165 SPDK_RUN_UBSAN=1
00:02:04.165 SPDK_TEST_XNVME=1
00:02:04.165 SPDK_TEST_NVME_FDP=1
00:02:04.165 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:04.165 RUN_NIGHTLY=1
15:27:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
15:27:25 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
15:27:25 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
15:27:25 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
15:27:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:27:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:27:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:27:25 -- paths/export.sh@5 -- $ export PATH
15:27:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:27:25 -- common/autobuild_common.sh@437 -- $ out=/home/vagrant/spdk_repo/spdk/../output
15:27:25 -- common/autobuild_common.sh@438 -- $ date +%s
15:27:25 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721834845.XXXXXX
15:27:25 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721834845.FKB9Ue
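
The setup.sh status table above closes the loop on the disk plumbing: the four controllers defined at domain creation show up in the guest as nvme0 through nvme3 (serials 12340 through 12343, presumably in BDF order 00:06.0 through 00:09.0), with nvme2 carrying the three multi* namespaces and nvme3 the FDP-enabled one. For reference, here is the FDP controller reassembled from the -device/-drive pairs in the domain-creation log as an explicit QEMU fragment. This is a sketch only: the argument values are verbatim from the log, but the machine, memory, and network options libvirt wraps around them are not shown in the log and are omitted here.

    # FDP subsystem + controller + namespace, values exactly as logged above
    # (surrounding -machine/-m/-netdev options omitted; libvirt generates those)
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096
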
00:02:04.166 15:27:25 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]]
00:02:04.166 15:27:25 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']'
00:02:04.166 15:27:25 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:04.166 15:27:25 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:04.166 15:27:25 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:04.166 15:27:25 -- common/autobuild_common.sh@454 -- $ get_config_params
00:02:04.166 15:27:25 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:02:04.166 15:27:25 -- common/autotest_common.sh@10 -- $ set +x
00:02:04.424 15:27:25 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:04.424 15:27:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:04.424 15:27:25 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:04.424 15:27:25 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:04.424 15:27:25 -- spdk/autobuild.sh@16 -- $ date -u
00:02:04.424 Wed Jul 24 03:27:25 PM UTC 2024
00:02:04.424 15:27:25 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:04.424 LTS-60-gdbef7efac
00:02:04.424 15:27:25 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:04.424 15:27:25 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:04.424 15:27:25 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:02:04.424 15:27:25 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:02:04.424 15:27:25 -- common/autotest_common.sh@10 -- $ set +x
00:02:04.424 ************************************
00:02:04.424 START TEST asan
00:02:04.424 ************************************
00:02:04.424 using asan
00:02:04.424 15:27:25 -- common/autotest_common.sh@1104 -- $ echo 'using asan'
00:02:04.424
00:02:04.424 real 0m0.000s
00:02:04.424 user 0m0.000s
00:02:04.424 sys 0m0.000s
00:02:04.424 15:27:25 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:04.424 15:27:25 -- common/autotest_common.sh@10 -- $ set +x
00:02:04.424 ************************************
00:02:04.424 END TEST asan
00:02:04.424 ************************************
00:02:04.424 15:27:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:04.424 15:27:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:04.424 15:27:25 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:02:04.424 15:27:25 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:02:04.424 15:27:25 -- common/autotest_common.sh@10 -- $ set +x
00:02:04.424 ************************************
00:02:04.424 START TEST ubsan
00:02:04.424 ************************************
00:02:04.424 using ubsan
00:02:04.424 15:27:25 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:02:04.424
00:02:04.424 real 0m0.000s
00:02:04.424 user 0m0.000s
00:02:04.424 sys 0m0.000s
00:02:04.424 15:27:25 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:04.424 ************************************
00:02:04.424 END TEST ubsan
00:02:04.424 15:27:25 -- common/autotest_common.sh@10 -- $ set +x
00:02:04.424 ************************************
00:02:04.424 15:27:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:04.424 15:27:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:04.424 15:27:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:04.424 15:27:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:04.424 15:27:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:04.424 15:27:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:04.424 15:27:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:04.424 15:27:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:04.424 15:27:25 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:04.424 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:04.424 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:04.991 Using 'verbs' RDMA provider
00:02:18.165 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:30.406 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:30.406 Creating mk/config.mk...done.
00:02:30.406 Creating mk/cc.flags.mk...done.
00:02:30.406 Type 'make' to build.
00:02:30.406 15:27:51 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:02:30.406 15:27:51 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:02:30.406 15:27:51 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:02:30.406 15:27:51 -- common/autotest_common.sh@10 -- $ set +x
00:02:30.406 ************************************
00:02:30.406 START TEST make
00:02:30.406 ************************************
00:02:30.406 15:27:51 -- common/autotest_common.sh@1104 -- $ make -j10
00:02:30.667 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:30.667 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:30.667 meson setup builddir \
00:02:30.667 -Dwith-libaio=enabled \
00:02:30.667 -Dwith-liburing=enabled \
00:02:30.667 -Dwith-libvfn=disabled \
00:02:30.667 -Dwith-spdk=false && \
00:02:30.667 meson compile -C builddir && \
00:02:30.667 cd -)
00:02:30.667 make[1]: Nothing to be done for 'all'.
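
One detail worth decoding in the autobuild trace: autobuild_common.sh@438 stamps its scratch workspace with `date +%s`, so the 1721834845 in /tmp/spdk_1721834845.FKB9Ue is simply the build's start time as a Unix epoch. It round-trips to the `date -u` output that autobuild.sh@16 printed above:

    $ date -u -d @1721834845
    Wed Jul 24 03:27:25 PM UTC 2024

(Other locales render the same instant as 15:27:25.) The libvirt domain name earlier in the log, with its _default_1721834801_ component, decodes the same way to 44 seconds before that, i.e. the moment vagrant created the VM.
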
00:02:33.953 The Meson build system
00:02:33.953 Version: 1.3.1
00:02:33.953 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:33.953 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:33.953 Build type: native build
00:02:33.953 Project name: xnvme
00:02:33.953 Project version: 0.7.3
00:02:33.953 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:33.953 C linker for the host machine: cc ld.bfd 2.39-16
00:02:33.953 Host machine cpu family: x86_64
00:02:33.953 Host machine cpu: x86_64
00:02:33.953 Message: host_machine.system: linux
00:02:33.953 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:33.953 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:33.953 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:33.953 Run-time dependency threads found: YES
00:02:33.953 Has header "setupapi.h" : NO
00:02:33.953 Has header "linux/blkzoned.h" : YES
00:02:33.953 Has header "linux/blkzoned.h" : YES (cached)
00:02:33.953 Has header "libaio.h" : YES
00:02:33.953 Library aio found: YES
00:02:33.953 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:33.953 Run-time dependency liburing found: YES 2.2
00:02:33.953 Dependency libvfn skipped: feature with-libvfn disabled
00:02:33.953 Run-time dependency appleframeworks found: NO (tried framework)
00:02:33.953 Run-time dependency appleframeworks found: NO (tried framework)
00:02:33.953 Configuring xnvme_config.h using configuration
00:02:33.953 Configuring xnvme.spec using configuration
00:02:33.953 Run-time dependency bash-completion found: YES 2.11
00:02:33.953 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:33.953 Program cp found: YES (/usr/bin/cp)
00:02:33.953 Has header "winsock2.h" : NO
00:02:33.953 Has header "dbghelp.h" : NO
00:02:33.953 Library rpcrt4 found: NO
00:02:33.953 Library rt found: YES
00:02:33.953 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:33.953 Found CMake: /usr/bin/cmake (3.27.7)
00:02:33.953 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:33.953 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:33.954 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:33.954 Build targets in project: 32
00:02:33.954
00:02:33.954 xnvme 0.7.3
00:02:33.954
00:02:33.954 User defined options
00:02:33.954 with-libaio : enabled
00:02:33.954 with-liburing: enabled
00:02:33.954 with-libvfn : disabled
00:02:33.954 with-spdk : false
00:02:33.954
00:02:33.954 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:34.521 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:34.521 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:34.779 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:34.779 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:34.779 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:34.779 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:34.779 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:34.779 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:34.779 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:34.779 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:34.779 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:34.779 [11/203]
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:34.779 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:34.779 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:35.037 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:35.037 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:35.037 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:35.037 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:35.037 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:35.037 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:35.037 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:35.037 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:35.037 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:35.037 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:35.037 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:35.037 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:35.037 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:35.037 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:35.037 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:35.037 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:35.037 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:35.037 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:35.037 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:35.296 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:35.296 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:35.296 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:35.296 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:35.296 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:35.296 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:35.296 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:35.296 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:35.296 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:35.296 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:35.296 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:35.296 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:35.296 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:35.296 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:35.296 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:35.296 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:35.296 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:35.296 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:35.296 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:35.296 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:35.296 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:35.296 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:35.554 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:35.554 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:35.554 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:35.554 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:35.554 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:35.554 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:35.554 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:35.554 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:35.554 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:35.554 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:35.554 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:35.812 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:35.812 [67/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:35.812 [68/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:35.812 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:35.812 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:35.812 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:35.812 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:35.812 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:35.812 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:35.812 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:35.812 [76/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:35.812 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:35.812 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:35.812 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:36.071 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:36.071 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:36.071 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:36.071 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:36.071 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:36.071 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:36.071 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:36.071 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:36.071 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:36.071 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:36.330 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:36.330 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:36.330 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:36.330 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:36.330 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:36.330 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:36.330 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:36.330 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:36.330 [98/203] Compiling C 
object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:36.330 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:36.330 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:36.330 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:36.330 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:36.330 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:36.330 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:36.330 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:36.330 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:36.330 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:36.330 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:36.330 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:36.330 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:36.588 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:36.588 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:36.588 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:36.588 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:36.588 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:36.588 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:36.588 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:36.588 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:36.588 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:36.588 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:36.588 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:36.588 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:36.588 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:36.588 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:36.588 [125/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:36.588 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:36.588 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:36.847 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:36.847 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:36.847 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:36.847 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:36.847 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:36.847 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:36.847 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:36.847 [135/203] Linking target lib/libxnvme.so 00:02:36.847 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:36.847 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:36.847 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:36.847 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:36.847 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:37.104 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:37.104 [142/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:37.104 [143/203] 
Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:37.104 [144/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:37.104 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:37.104 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:37.104 [147/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:37.362 [148/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:37.362 [149/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:37.362 [150/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:37.362 [151/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:37.362 [152/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:37.362 [153/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:37.362 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:37.362 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:37.362 [156/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:37.620 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:37.620 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:37.620 [159/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:37.620 [160/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:37.620 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:37.620 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:37.620 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:37.620 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:37.620 [165/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:37.620 [166/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:37.620 [167/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:37.878 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:37.878 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:37.878 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:37.878 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:38.136 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:38.136 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:38.136 [174/203] Linking static target lib/libxnvme.a 00:02:38.136 [175/203] Linking target tests/xnvme_tests_cli 00:02:38.136 [176/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:38.136 [177/203] Linking target tests/xnvme_tests_async_intf 00:02:38.394 [178/203] Linking target tests/xnvme_tests_lblk 00:02:38.394 [179/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:38.394 [180/203] Linking target tests/xnvme_tests_xnvme_file 00:02:38.394 [181/203] Linking target tests/xnvme_tests_ioworker 00:02:38.394 [182/203] Linking target tests/xnvme_tests_enum 00:02:38.394 [183/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:38.394 [184/203] Linking target tests/xnvme_tests_buf 00:02:38.394 [185/203] Linking target tests/xnvme_tests_scc 00:02:38.394 [186/203] Linking target tests/xnvme_tests_znd_append 00:02:38.394 [187/203] Linking target tests/xnvme_tests_znd_state 00:02:38.394 [188/203] Linking target tools/lblk 00:02:38.394 [189/203] Linking target tests/xnvme_tests_kvs 00:02:38.394 [190/203] Linking target tools/xdd 00:02:38.394 [191/203] Linking target tests/xnvme_tests_map 
00:02:38.394 [192/203] Linking target tools/xnvme_file
00:02:38.394 [193/203] Linking target examples/xnvme_dev
00:02:38.394 [194/203] Linking target tools/zoned
00:02:38.394 [195/203] Linking target tools/kvs
00:02:38.394 [196/203] Linking target tools/xnvme
00:02:38.394 [197/203] Linking target examples/xnvme_enum
00:02:38.394 [198/203] Linking target examples/xnvme_single_async
00:02:38.394 [199/203] Linking target examples/xnvme_hello
00:02:38.394 [200/203] Linking target examples/xnvme_io_async
00:02:38.394 [201/203] Linking target examples/zoned_io_async
00:02:38.394 [202/203] Linking target examples/xnvme_single_sync
00:02:38.394 [203/203] Linking target examples/zoned_io_sync
00:02:38.394 INFO: autodetecting backend as ninja
00:02:38.394 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:38.652 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:48.615 The Meson build system
00:02:48.615 Version: 1.3.1
00:02:48.615 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:48.615 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:48.615 Build type: native build
00:02:48.615 Program cat found: YES (/usr/bin/cat)
00:02:48.615 Project name: DPDK
00:02:48.615 Project version: 23.11.0
00:02:48.615 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:48.615 C linker for the host machine: cc ld.bfd 2.39-16
00:02:48.615 Host machine cpu family: x86_64
00:02:48.615 Host machine cpu: x86_64
00:02:48.615 Message: ## Building in Developer Mode ##
00:02:48.615 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:48.615 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:48.615 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:48.615 Program python3 found: YES (/usr/bin/python3)
00:02:48.615 Program cat found: YES (/usr/bin/cat)
00:02:48.615 Compiler for C supports arguments -march=native: YES
00:02:48.615 Checking for size of "void *" : 8
00:02:48.615 Checking for size of "void *" : 8 (cached)
00:02:48.615 Library m found: YES
00:02:48.615 Library numa found: YES
00:02:48.615 Has header "numaif.h" : YES
00:02:48.615 Library fdt found: NO
00:02:48.615 Library execinfo found: NO
00:02:48.615 Has header "execinfo.h" : YES
00:02:48.615 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:48.615 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:48.615 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:48.615 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:48.615 Run-time dependency openssl found: YES 3.0.9
00:02:48.615 Run-time dependency libpcap found: YES 1.10.4
00:02:48.615 Has header "pcap.h" with dependency libpcap: YES
00:02:48.615 Compiler for C supports arguments -Wcast-qual: YES
00:02:48.615 Compiler for C supports arguments -Wdeprecated: YES
00:02:48.615 Compiler for C supports arguments -Wformat: YES
00:02:48.615 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:48.615 Compiler for C supports arguments -Wformat-security: NO
00:02:48.615 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:48.615 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:48.615 Compiler for C supports arguments -Wnested-externs: YES
00:02:48.615 Compiler for C supports arguments -Wold-style-definition: YES
00:02:48.615 Compiler for C supports arguments -Wpointer-arith: YES
00:02:48.615 Compiler for C supports arguments -Wsign-compare: YES
00:02:48.615 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:48.615 Compiler for C supports arguments -Wundef: YES
00:02:48.615 Compiler for C supports arguments -Wwrite-strings: YES
00:02:48.615 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:48.615 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:48.615 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:48.615 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:48.615 Program objdump found: YES (/usr/bin/objdump)
00:02:48.615 Compiler for C supports arguments -mavx512f: YES
00:02:48.615 Checking if "AVX512 checking" compiles: YES
00:02:48.615 Fetching value of define "__SSE4_2__" : 1
00:02:48.615 Fetching value of define "__AES__" : 1
00:02:48.615 Fetching value of define "__AVX__" : 1
00:02:48.615 Fetching value of define "__AVX2__" : 1
00:02:48.615 Fetching value of define "__AVX512BW__" : (undefined)
00:02:48.615 Fetching value of define "__AVX512CD__" : (undefined)
00:02:48.615 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:48.616 Fetching value of define "__AVX512F__" : (undefined)
00:02:48.616 Fetching value of define "__AVX512VL__" : (undefined)
00:02:48.616 Fetching value of define "__PCLMUL__" : 1
00:02:48.616 Fetching value of define "__RDRND__" : 1
00:02:48.616 Fetching value of define "__RDSEED__" : 1
00:02:48.616 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:48.616 Fetching value of define "__znver1__" : (undefined)
00:02:48.616 Fetching value of define "__znver2__" : (undefined)
00:02:48.616 Fetching value of define "__znver3__" : (undefined)
00:02:48.616 Fetching value of define "__znver4__" : (undefined)
00:02:48.616 Library asan found: YES
00:02:48.616 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:48.616 Message: lib/log: Defining dependency "log"
00:02:48.616 Message: lib/kvargs: Defining dependency "kvargs"
00:02:48.616 Message: lib/telemetry: Defining dependency "telemetry"
00:02:48.616 Library rt found: YES
00:02:48.616 Checking for function "getentropy" : NO
00:02:48.616 Message: lib/eal: Defining dependency "eal"
00:02:48.616 Message: lib/ring: Defining dependency "ring"
00:02:48.616 Message: lib/rcu: Defining dependency "rcu"
00:02:48.616 Message: lib/mempool: Defining dependency "mempool"
00:02:48.616 Message: lib/mbuf: Defining dependency "mbuf"
00:02:48.616 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:48.616 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:48.616 Compiler for C supports arguments -mpclmul: YES
00:02:48.616 Compiler for C supports arguments -maes: YES
00:02:48.616 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:48.616 Compiler for C supports arguments -mavx512bw: YES
00:02:48.616 Compiler for C supports arguments -mavx512dq: YES
00:02:48.616 Compiler for C supports arguments -mavx512vl: YES
00:02:48.616 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:48.616 Compiler for C supports arguments -mavx2: YES
00:02:48.616 Compiler for C supports arguments -mavx: YES
00:02:48.616 Message: lib/net: Defining dependency "net"
00:02:48.616 Message: lib/meter: Defining dependency "meter"
00:02:48.616 Message: lib/ethdev: Defining dependency "ethdev"
00:02:48.616 Message: lib/pci: Defining dependency "pci"
00:02:48.616 Message: lib/cmdline: Defining dependency "cmdline"
00:02:48.616 Message: lib/hash: Defining dependency "hash"
00:02:48.616 Message: lib/timer: Defining dependency "timer"
00:02:48.616 Message: lib/compressdev: Defining dependency "compressdev"
00:02:48.616 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:48.616 Message: lib/dmadev: Defining dependency "dmadev"
00:02:48.616 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:48.616 Message: lib/power: Defining dependency "power"
00:02:48.616 Message: lib/reorder: Defining dependency "reorder"
00:02:48.616 Message: lib/security: Defining dependency "security"
00:02:48.616 Has header "linux/userfaultfd.h" : YES
00:02:48.616 Has header "linux/vduse.h" : YES
00:02:48.616 Message: lib/vhost: Defining dependency "vhost"
00:02:48.616 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:48.616 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:48.616 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:48.616 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:48.616 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:48.616 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:48.616 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:48.616 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:48.616 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:48.616 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:48.616 Program doxygen found: YES (/usr/bin/doxygen)
00:02:48.616 Configuring doxy-api-html.conf using configuration
00:02:48.616 Configuring doxy-api-man.conf using configuration
00:02:48.616 Program mandb found: YES (/usr/bin/mandb)
00:02:48.616 Program sphinx-build found: NO
00:02:48.616 Configuring rte_build_config.h using configuration
00:02:48.616 Message:
00:02:48.616 =================
00:02:48.616 Applications Enabled
00:02:48.616 =================
00:02:48.616
00:02:48.616 apps:
00:02:48.616
00:02:48.616
00:02:48.616 Message:
00:02:48.616 =================
00:02:48.616 Libraries Enabled
00:02:48.616 =================
00:02:48.616
00:02:48.616 libs:
00:02:48.616 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:48.616 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:48.616 cryptodev, dmadev, power, reorder, security, vhost,
00:02:48.616
00:02:48.616 Message:
00:02:48.616 ===============
00:02:48.616 Drivers Enabled
00:02:48.616 ===============
00:02:48.616
00:02:48.616 common:
00:02:48.616
00:02:48.616 bus:
00:02:48.616 pci, vdev,
00:02:48.616 mempool:
00:02:48.616 ring,
00:02:48.616 dma:
00:02:48.616
00:02:48.616 net:
00:02:48.616
00:02:48.616 crypto:
00:02:48.616
00:02:48.616 compress:
00:02:48.616
00:02:48.616 vdpa:
00:02:48.616
00:02:48.616
00:02:48.616 Message:
00:02:48.616 =================
00:02:48.616 Content Skipped
00:02:48.616 =================
00:02:48.616
00:02:48.616 apps:
00:02:48.616 dumpcap: explicitly disabled via build config
00:02:48.616 graph: explicitly disabled via build config
00:02:48.616 pdump: explicitly disabled via build config
00:02:48.616 proc-info: explicitly disabled via build config
00:02:48.616 test-acl: explicitly disabled via build config
00:02:48.616 test-bbdev: explicitly disabled via build config
00:02:48.616 test-cmdline: explicitly disabled via build config
00:02:48.616 test-compress-perf: explicitly disabled via build config
00:02:48.616 test-crypto-perf: explicitly disabled via build config
00:02:48.616 test-dma-perf: explicitly disabled via build config
00:02:48.616 test-eventdev: explicitly disabled via build config
00:02:48.616 test-fib: explicitly disabled via build config
00:02:48.616 test-flow-perf: explicitly disabled via build config
00:02:48.616 test-gpudev: explicitly disabled via build config
00:02:48.616 test-mldev: explicitly disabled via build config
00:02:48.616 test-pipeline: explicitly disabled via build config
00:02:48.616 test-pmd: explicitly disabled via build config
00:02:48.616 test-regex: explicitly disabled via build config
00:02:48.616 test-sad: explicitly disabled via build config
00:02:48.616 test-security-perf: explicitly disabled via build config
00:02:48.616
00:02:48.616 libs:
00:02:48.616 metrics: explicitly disabled via build config
00:02:48.616 acl: explicitly disabled via build config
00:02:48.616 bbdev: explicitly disabled via build config
00:02:48.616 bitratestats: explicitly disabled via build config
00:02:48.616 bpf: explicitly disabled via build config
00:02:48.616 cfgfile: explicitly disabled via build config
00:02:48.616 distributor: explicitly disabled via build config
00:02:48.616 efd: explicitly disabled via build config
00:02:48.616 eventdev: explicitly disabled via build config
00:02:48.616 dispatcher: explicitly disabled via build config
00:02:48.616 gpudev: explicitly disabled via build config
00:02:48.616 gro: explicitly disabled via build config
00:02:48.616 gso: explicitly disabled via build config
00:02:48.616 ip_frag: explicitly disabled via build config
00:02:48.616 jobstats: explicitly disabled via build config
00:02:48.616 latencystats: explicitly disabled via build config
00:02:48.616 lpm: explicitly disabled via build config
00:02:48.616 member: explicitly disabled via build config
00:02:48.616 pcapng: explicitly disabled via build config
00:02:48.616 rawdev: explicitly disabled via build config
00:02:48.616 regexdev: explicitly disabled via build config
00:02:48.616 mldev: explicitly disabled via build config
00:02:48.616 rib: explicitly disabled via build config
00:02:48.616 sched: explicitly disabled via build config
00:02:48.616 stack: explicitly disabled via build config
00:02:48.616 ipsec: explicitly disabled via build config
00:02:48.616 pdcp: explicitly disabled via build config
00:02:48.616 fib: explicitly disabled via build config
00:02:48.616 port: explicitly disabled via build config
00:02:48.616 pdump: explicitly disabled via build config
00:02:48.616 table: explicitly disabled via build config
00:02:48.616 pipeline: explicitly disabled via build config
00:02:48.616 graph: explicitly disabled via build config
00:02:48.616 node: explicitly disabled via build config
00:02:48.616
00:02:48.616 drivers:
00:02:48.616 common/cpt: not in enabled drivers build config
00:02:48.616 common/dpaax: not in enabled drivers build config
00:02:48.616 common/iavf: not in enabled drivers build config
00:02:48.616 common/idpf: not in enabled drivers build config
00:02:48.616 common/mvep: not in enabled drivers build config
00:02:48.616 common/octeontx: not in enabled drivers build config
00:02:48.616 bus/auxiliary: not in enabled drivers build config
00:02:48.616 bus/cdx: not in enabled drivers build config
00:02:48.616 bus/dpaa: not in enabled drivers build config
00:02:48.616 bus/fslmc: not in enabled drivers build config
00:02:48.616 bus/ifpga: not in enabled drivers build config
00:02:48.616 bus/platform: not in enabled drivers build config
00:02:48.616 bus/vmbus: not in enabled drivers build config
00:02:48.616 common/cnxk: not in enabled drivers build config
common/mlx5: not in enabled drivers build config 00:02:48.616 common/nfp: not in enabled drivers build config 00:02:48.616 common/qat: not in enabled drivers build config 00:02:48.616 common/sfc_efx: not in enabled drivers build config 00:02:48.616 mempool/bucket: not in enabled drivers build config 00:02:48.616 mempool/cnxk: not in enabled drivers build config 00:02:48.616 mempool/dpaa: not in enabled drivers build config 00:02:48.616 mempool/dpaa2: not in enabled drivers build config 00:02:48.616 mempool/octeontx: not in enabled drivers build config 00:02:48.616 mempool/stack: not in enabled drivers build config 00:02:48.616 dma/cnxk: not in enabled drivers build config 00:02:48.616 dma/dpaa: not in enabled drivers build config 00:02:48.617 dma/dpaa2: not in enabled drivers build config 00:02:48.617 dma/hisilicon: not in enabled drivers build config 00:02:48.617 dma/idxd: not in enabled drivers build config 00:02:48.617 dma/ioat: not in enabled drivers build config 00:02:48.617 dma/skeleton: not in enabled drivers build config 00:02:48.617 net/af_packet: not in enabled drivers build config 00:02:48.617 net/af_xdp: not in enabled drivers build config 00:02:48.617 net/ark: not in enabled drivers build config 00:02:48.617 net/atlantic: not in enabled drivers build config 00:02:48.617 net/avp: not in enabled drivers build config 00:02:48.617 net/axgbe: not in enabled drivers build config 00:02:48.617 net/bnx2x: not in enabled drivers build config 00:02:48.617 net/bnxt: not in enabled drivers build config 00:02:48.617 net/bonding: not in enabled drivers build config 00:02:48.617 net/cnxk: not in enabled drivers build config 00:02:48.617 net/cpfl: not in enabled drivers build config 00:02:48.617 net/cxgbe: not in enabled drivers build config 00:02:48.617 net/dpaa: not in enabled drivers build config 00:02:48.617 net/dpaa2: not in enabled drivers build config 00:02:48.617 net/e1000: not in enabled drivers build config 00:02:48.617 net/ena: not in enabled drivers build config 00:02:48.617 net/enetc: not in enabled drivers build config 00:02:48.617 net/enetfec: not in enabled drivers build config 00:02:48.617 net/enic: not in enabled drivers build config 00:02:48.617 net/failsafe: not in enabled drivers build config 00:02:48.617 net/fm10k: not in enabled drivers build config 00:02:48.617 net/gve: not in enabled drivers build config 00:02:48.617 net/hinic: not in enabled drivers build config 00:02:48.617 net/hns3: not in enabled drivers build config 00:02:48.617 net/i40e: not in enabled drivers build config 00:02:48.617 net/iavf: not in enabled drivers build config 00:02:48.617 net/ice: not in enabled drivers build config 00:02:48.617 net/idpf: not in enabled drivers build config 00:02:48.617 net/igc: not in enabled drivers build config 00:02:48.617 net/ionic: not in enabled drivers build config 00:02:48.617 net/ipn3ke: not in enabled drivers build config 00:02:48.617 net/ixgbe: not in enabled drivers build config 00:02:48.617 net/mana: not in enabled drivers build config 00:02:48.617 net/memif: not in enabled drivers build config 00:02:48.617 net/mlx4: not in enabled drivers build config 00:02:48.617 net/mlx5: not in enabled drivers build config 00:02:48.617 net/mvneta: not in enabled drivers build config 00:02:48.617 net/mvpp2: not in enabled drivers build config 00:02:48.617 net/netvsc: not in enabled drivers build config 00:02:48.617 net/nfb: not in enabled drivers build config 00:02:48.617 net/nfp: not in enabled drivers build config 00:02:48.617 net/ngbe: not in enabled drivers build config 
00:02:48.617 net/null: not in enabled drivers build config 00:02:48.617 net/octeontx: not in enabled drivers build config 00:02:48.617 net/octeon_ep: not in enabled drivers build config 00:02:48.617 net/pcap: not in enabled drivers build config 00:02:48.617 net/pfe: not in enabled drivers build config 00:02:48.617 net/qede: not in enabled drivers build config 00:02:48.617 net/ring: not in enabled drivers build config 00:02:48.617 net/sfc: not in enabled drivers build config 00:02:48.617 net/softnic: not in enabled drivers build config 00:02:48.617 net/tap: not in enabled drivers build config 00:02:48.617 net/thunderx: not in enabled drivers build config 00:02:48.617 net/txgbe: not in enabled drivers build config 00:02:48.617 net/vdev_netvsc: not in enabled drivers build config 00:02:48.617 net/vhost: not in enabled drivers build config 00:02:48.617 net/virtio: not in enabled drivers build config 00:02:48.617 net/vmxnet3: not in enabled drivers build config 00:02:48.617 raw/*: missing internal dependency, "rawdev" 00:02:48.617 crypto/armv8: not in enabled drivers build config 00:02:48.617 crypto/bcmfs: not in enabled drivers build config 00:02:48.617 crypto/caam_jr: not in enabled drivers build config 00:02:48.617 crypto/ccp: not in enabled drivers build config 00:02:48.617 crypto/cnxk: not in enabled drivers build config 00:02:48.617 crypto/dpaa_sec: not in enabled drivers build config 00:02:48.617 crypto/dpaa2_sec: not in enabled drivers build config 00:02:48.617 crypto/ipsec_mb: not in enabled drivers build config 00:02:48.617 crypto/mlx5: not in enabled drivers build config 00:02:48.617 crypto/mvsam: not in enabled drivers build config 00:02:48.617 crypto/nitrox: not in enabled drivers build config 00:02:48.617 crypto/null: not in enabled drivers build config 00:02:48.617 crypto/octeontx: not in enabled drivers build config 00:02:48.617 crypto/openssl: not in enabled drivers build config 00:02:48.617 crypto/scheduler: not in enabled drivers build config 00:02:48.617 crypto/uadk: not in enabled drivers build config 00:02:48.617 crypto/virtio: not in enabled drivers build config 00:02:48.617 compress/isal: not in enabled drivers build config 00:02:48.617 compress/mlx5: not in enabled drivers build config 00:02:48.617 compress/octeontx: not in enabled drivers build config 00:02:48.617 compress/zlib: not in enabled drivers build config 00:02:48.617 regex/*: missing internal dependency, "regexdev" 00:02:48.617 ml/*: missing internal dependency, "mldev" 00:02:48.617 vdpa/ifc: not in enabled drivers build config 00:02:48.617 vdpa/mlx5: not in enabled drivers build config 00:02:48.617 vdpa/nfp: not in enabled drivers build config 00:02:48.617 vdpa/sfc: not in enabled drivers build config 00:02:48.617 event/*: missing internal dependency, "eventdev" 00:02:48.617 baseband/*: missing internal dependency, "bbdev" 00:02:48.617 gpu/*: missing internal dependency, "gpudev" 00:02:48.617 00:02:48.617 00:02:49.182 Build targets in project: 85 00:02:49.182 00:02:49.182 DPDK 23.11.0 00:02:49.182 00:02:49.182 User defined options 00:02:49.182 buildtype : debug 00:02:49.182 default_library : shared 00:02:49.182 libdir : lib 00:02:49.182 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:49.182 b_sanitize : address 00:02:49.182 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:49.182 c_link_args : 00:02:49.182 cpu_instruction_set: native 00:02:49.182 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:49.182 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:49.182 enable_docs : false 00:02:49.182 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:49.182 enable_kmods : false 00:02:49.182 tests : false 00:02:49.182 00:02:49.182 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:49.747 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:50.005 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:50.005 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:50.005 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:50.005 [4/265] Linking static target lib/librte_kvargs.a 00:02:50.005 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:50.005 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:50.263 [7/265] Linking static target lib/librte_log.a 00:02:50.263 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:50.263 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:50.263 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:50.522 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.087 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:51.345 [13/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.345 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:51.603 [15/265] Linking target lib/librte_log.so.24.0 00:02:51.603 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:51.603 [17/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:51.603 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:51.861 [19/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:51.862 [20/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:51.862 [21/265] Linking static target lib/librte_telemetry.a 00:02:51.862 [22/265] Linking target lib/librte_kvargs.so.24.0 00:02:52.119 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:52.119 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:52.376 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:52.376 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:52.376 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:52.634 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:52.891 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:52.891 [30/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.148 [31/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:53.148 [32/265] Linking target lib/librte_telemetry.so.24.0 00:02:53.148 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:53.405 [34/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:53.405 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:53.663 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:53.663 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:53.663 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:53.920 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:53.920 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:53.920 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:53.920 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:54.178 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:54.178 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:54.435 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:54.435 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:54.693 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:54.693 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:54.951 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:55.208 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:55.465 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:55.465 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:55.722 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:55.722 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:55.722 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:55.722 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:55.980 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:56.237 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:56.237 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:56.237 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:56.237 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:56.237 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:56.495 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:56.495 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:57.060 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:57.060 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:57.316 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:57.316 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:57.316 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:57.574 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:57.574 
[71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:57.574 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:57.831 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:57.831 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:57.831 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:57.831 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:58.088 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:58.088 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:58.344 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:58.601 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:58.859 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:59.116 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:59.373 [83/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:59.373 [84/265] Linking static target lib/librte_rcu.a 00:02:59.373 [85/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:59.373 [86/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:59.630 [87/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:59.630 [88/265] Linking static target lib/librte_ring.a 00:02:59.888 [89/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:59.888 [90/265] Linking static target lib/librte_eal.a 00:02:59.888 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:59.888 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:00.145 [93/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.402 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:00.402 [95/265] Linking static target lib/librte_mempool.a 00:03:00.402 [96/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.660 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:00.920 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:01.177 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:01.436 [100/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:01.436 [101/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:01.694 [102/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:01.694 [103/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:01.951 [104/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.951 [105/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:02.208 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:02.466 [107/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:02.466 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:02.724 [109/265] Linking static target lib/librte_net.a 00:03:02.724 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:02.724 [111/265] Linking static target lib/librte_meter.a 00:03:02.982 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:02.982 [113/265] Compiling 
C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:02.982 [114/265] Linking static target lib/librte_mbuf.a 00:03:03.240 [115/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.240 [116/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.498 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:03.756 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:04.013 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:04.271 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:04.558 [121/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.558 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:05.123 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:05.380 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:05.380 [125/265] Linking static target lib/librte_pci.a 00:03:05.380 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:05.637 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:05.637 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:05.637 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:05.637 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:05.637 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:05.894 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:05.894 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:05.894 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:05.894 [135/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.894 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:05.894 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:05.894 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:05.894 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:05.894 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:06.152 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:06.152 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:06.410 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:06.410 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:06.410 [145/265] Linking static target lib/librte_cmdline.a 00:03:06.668 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:06.925 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:07.183 [148/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:07.183 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:07.183 [150/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:07.183 [151/265] Linking static target lib/librte_timer.a 00:03:07.441 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:07.699 
[153/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:07.699 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:07.699 [155/265] Linking static target lib/librte_hash.a 00:03:07.699 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:07.699 [157/265] Linking static target lib/librte_ethdev.a 00:03:07.699 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:07.957 [159/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:08.214 [160/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:08.214 [161/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.214 [162/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:08.506 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:08.506 [164/265] Linking static target lib/librte_dmadev.a 00:03:08.506 [165/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:08.506 [166/265] Linking static target lib/librte_compressdev.a 00:03:08.506 [167/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.506 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:09.072 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:09.072 [170/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.072 [171/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:09.072 [172/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.329 [173/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:09.329 [174/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:09.586 [175/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.843 [176/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:09.843 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:09.843 [178/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:09.843 [179/265] Linking static target lib/librte_cryptodev.a 00:03:10.100 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:10.100 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:10.100 [182/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:10.100 [183/265] Linking static target lib/librte_power.a 00:03:10.357 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:10.357 [185/265] Linking static target lib/librte_reorder.a 00:03:10.615 [186/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:10.615 [187/265] Linking static target lib/librte_security.a 00:03:10.615 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:10.872 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:10.872 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:11.130 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.388 [192/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.646 
[193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.215 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:12.215 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:12.215 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:12.215 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:12.215 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:12.523 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:12.523 [200/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.780 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:12.780 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:12.780 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:13.038 [204/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:13.038 [205/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:13.038 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:13.296 [207/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:13.296 [208/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:13.296 [209/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:13.296 [210/265] Linking static target drivers/librte_bus_vdev.a 00:03:13.296 [211/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:13.296 [212/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:13.553 [213/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:13.553 [214/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.554 [215/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:13.554 [216/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:13.554 [217/265] Linking static target drivers/librte_bus_pci.a 00:03:14.119 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:14.119 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:14.119 [220/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.119 [221/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.119 [222/265] Linking target lib/librte_eal.so.24.0 00:03:14.119 [223/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:14.377 [224/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:14.377 [225/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:14.377 [226/265] Linking static target drivers/librte_mempool_ring.a 00:03:14.377 [227/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:14.377 [228/265] Linking target lib/librte_timer.so.24.0 00:03:14.377 [229/265] Linking target lib/librte_dmadev.so.24.0 00:03:14.377 [230/265] Linking target lib/librte_meter.so.24.0 00:03:14.377 [231/265] Linking 
target lib/librte_ring.so.24.0 00:03:14.377 [232/265] Linking target lib/librte_pci.so.24.0 00:03:14.377 [233/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:14.634 [234/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:14.634 [235/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:14.634 [236/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:14.634 [237/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:14.634 [238/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:14.634 [239/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:14.634 [240/265] Linking target lib/librte_rcu.so.24.0 00:03:14.634 [241/265] Linking target lib/librte_mempool.so.24.0 00:03:14.892 [242/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:14.892 [243/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:14.892 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:14.892 [245/265] Linking target lib/librte_mbuf.so.24.0 00:03:14.892 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:15.150 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:15.150 [248/265] Linking target lib/librte_compressdev.so.24.0 00:03:15.150 [249/265] Linking target lib/librte_reorder.so.24.0 00:03:15.150 [250/265] Linking target lib/librte_net.so.24.0 00:03:15.150 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:03:15.408 [252/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:15.408 [253/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:15.408 [254/265] Linking target lib/librte_cmdline.so.24.0 00:03:15.408 [255/265] Linking target lib/librte_hash.so.24.0 00:03:15.408 [256/265] Linking target lib/librte_security.so.24.0 00:03:15.666 [257/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:15.924 [258/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.182 [259/265] Linking target lib/librte_ethdev.so.24.0 00:03:16.470 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:16.470 [261/265] Linking target lib/librte_power.so.24.0 00:03:20.652 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:20.652 [263/265] Linking static target lib/librte_vhost.a 00:03:22.024 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.024 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:22.024 INFO: autodetecting backend as ninja 00:03:22.024 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:22.956 CC lib/log/log.o 00:03:22.956 CC lib/log/log_flags.o 00:03:22.956 CC lib/log/log_deprecated.o 00:03:22.956 CC lib/ut_mock/mock.o 00:03:22.956 CC lib/ut/ut.o 00:03:23.214 LIB libspdk_ut_mock.a 00:03:23.214 SO libspdk_ut_mock.so.5.0 00:03:23.214 LIB libspdk_log.a 00:03:23.214 LIB libspdk_ut.a 00:03:23.214 SO libspdk_log.so.6.1 00:03:23.214 SYMLINK libspdk_ut_mock.so 00:03:23.214 SO libspdk_ut.so.1.0 00:03:23.473 SYMLINK libspdk_log.so 00:03:23.473 SYMLINK libspdk_ut.so 00:03:23.473 CC lib/util/bit_array.o 00:03:23.473 CC lib/util/base64.o 
00:03:23.473 CC lib/dma/dma.o 00:03:23.473 CC lib/util/cpuset.o 00:03:23.473 CC lib/util/crc16.o 00:03:23.473 CC lib/util/crc32.o 00:03:23.473 CC lib/util/crc32c.o 00:03:23.473 CXX lib/trace_parser/trace.o 00:03:23.473 CC lib/ioat/ioat.o 00:03:23.473 CC lib/vfio_user/host/vfio_user_pci.o 00:03:23.730 CC lib/util/crc32_ieee.o 00:03:23.730 CC lib/util/crc64.o 00:03:23.730 CC lib/util/dif.o 00:03:23.730 LIB libspdk_dma.a 00:03:23.730 CC lib/vfio_user/host/vfio_user.o 00:03:23.730 SO libspdk_dma.so.3.0 00:03:23.730 CC lib/util/fd.o 00:03:23.730 SYMLINK libspdk_dma.so 00:03:23.730 CC lib/util/file.o 00:03:23.730 CC lib/util/hexlify.o 00:03:23.730 CC lib/util/iov.o 00:03:23.730 CC lib/util/math.o 00:03:23.988 CC lib/util/pipe.o 00:03:23.988 CC lib/util/strerror_tls.o 00:03:23.988 CC lib/util/string.o 00:03:23.988 CC lib/util/uuid.o 00:03:23.988 LIB libspdk_ioat.a 00:03:23.988 LIB libspdk_vfio_user.a 00:03:23.988 SO libspdk_ioat.so.6.0 00:03:23.988 SO libspdk_vfio_user.so.4.0 00:03:23.988 CC lib/util/fd_group.o 00:03:23.988 SYMLINK libspdk_ioat.so 00:03:23.988 SYMLINK libspdk_vfio_user.so 00:03:23.988 CC lib/util/xor.o 00:03:23.988 CC lib/util/zipf.o 00:03:24.557 LIB libspdk_util.a 00:03:24.815 LIB libspdk_trace_parser.a 00:03:24.815 SO libspdk_util.so.8.0 00:03:24.815 SO libspdk_trace_parser.so.4.0 00:03:25.073 SYMLINK libspdk_trace_parser.so 00:03:25.073 SYMLINK libspdk_util.so 00:03:25.073 CC lib/vmd/vmd.o 00:03:25.073 CC lib/vmd/led.o 00:03:25.073 CC lib/conf/conf.o 00:03:25.073 CC lib/rdma/common.o 00:03:25.073 CC lib/rdma/rdma_verbs.o 00:03:25.073 CC lib/env_dpdk/env.o 00:03:25.073 CC lib/env_dpdk/memory.o 00:03:25.073 CC lib/env_dpdk/pci.o 00:03:25.073 CC lib/json/json_parse.o 00:03:25.073 CC lib/idxd/idxd.o 00:03:25.331 CC lib/json/json_util.o 00:03:25.588 CC lib/json/json_write.o 00:03:25.588 LIB libspdk_conf.a 00:03:25.588 CC lib/idxd/idxd_user.o 00:03:25.588 SO libspdk_conf.so.5.0 00:03:25.847 LIB libspdk_rdma.a 00:03:25.847 SYMLINK libspdk_conf.so 00:03:25.847 CC lib/idxd/idxd_kernel.o 00:03:25.847 SO libspdk_rdma.so.5.0 00:03:25.847 SYMLINK libspdk_rdma.so 00:03:25.847 CC lib/env_dpdk/init.o 00:03:25.847 CC lib/env_dpdk/threads.o 00:03:25.847 CC lib/env_dpdk/pci_ioat.o 00:03:26.105 CC lib/env_dpdk/pci_virtio.o 00:03:26.105 CC lib/env_dpdk/pci_vmd.o 00:03:26.105 CC lib/env_dpdk/pci_idxd.o 00:03:26.105 CC lib/env_dpdk/pci_event.o 00:03:26.105 LIB libspdk_json.a 00:03:26.363 SO libspdk_json.so.5.1 00:03:26.363 CC lib/env_dpdk/sigbus_handler.o 00:03:26.363 CC lib/env_dpdk/pci_dpdk.o 00:03:26.363 SYMLINK libspdk_json.so 00:03:26.363 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:26.363 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:26.363 LIB libspdk_idxd.a 00:03:26.620 SO libspdk_idxd.so.11.0 00:03:26.620 CC lib/jsonrpc/jsonrpc_server.o 00:03:26.620 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:26.620 CC lib/jsonrpc/jsonrpc_client.o 00:03:26.620 LIB libspdk_vmd.a 00:03:26.620 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:26.620 SYMLINK libspdk_idxd.so 00:03:26.620 SO libspdk_vmd.so.5.0 00:03:26.878 SYMLINK libspdk_vmd.so 00:03:27.135 LIB libspdk_jsonrpc.a 00:03:27.135 SO libspdk_jsonrpc.so.5.1 00:03:27.135 SYMLINK libspdk_jsonrpc.so 00:03:27.392 CC lib/rpc/rpc.o 00:03:27.649 LIB libspdk_rpc.a 00:03:27.649 SO libspdk_rpc.so.5.0 00:03:27.649 SYMLINK libspdk_rpc.so 00:03:27.912 CC lib/notify/notify.o 00:03:27.912 CC lib/notify/notify_rpc.o 00:03:27.912 CC lib/trace/trace.o 00:03:27.912 CC lib/trace/trace_rpc.o 00:03:27.912 CC lib/trace/trace_flags.o 00:03:27.912 CC lib/sock/sock_rpc.o 00:03:27.912 CC lib/sock/sock.o 
00:03:27.912 LIB libspdk_env_dpdk.a 00:03:28.169 SO libspdk_env_dpdk.so.13.0 00:03:28.169 LIB libspdk_notify.a 00:03:28.169 SO libspdk_notify.so.5.0 00:03:28.426 LIB libspdk_trace.a 00:03:28.426 SYMLINK libspdk_notify.so 00:03:28.426 SO libspdk_trace.so.9.0 00:03:28.426 SYMLINK libspdk_env_dpdk.so 00:03:28.426 LIB libspdk_sock.a 00:03:28.426 SYMLINK libspdk_trace.so 00:03:28.426 SO libspdk_sock.so.8.0 00:03:28.683 SYMLINK libspdk_sock.so 00:03:28.683 CC lib/thread/thread.o 00:03:28.683 CC lib/thread/iobuf.o 00:03:28.683 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:28.683 CC lib/nvme/nvme_ctrlr.o 00:03:28.683 CC lib/nvme/nvme_fabric.o 00:03:28.683 CC lib/nvme/nvme_ns_cmd.o 00:03:28.683 CC lib/nvme/nvme_ns.o 00:03:28.683 CC lib/nvme/nvme_pcie_common.o 00:03:28.683 CC lib/nvme/nvme_pcie.o 00:03:28.683 CC lib/nvme/nvme_qpair.o 00:03:29.247 CC lib/nvme/nvme.o 00:03:29.813 CC lib/nvme/nvme_quirks.o 00:03:29.813 CC lib/nvme/nvme_transport.o 00:03:29.813 CC lib/nvme/nvme_discovery.o 00:03:29.813 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:30.072 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:30.072 CC lib/nvme/nvme_tcp.o 00:03:30.330 CC lib/nvme/nvme_opal.o 00:03:30.330 CC lib/nvme/nvme_io_msg.o 00:03:30.330 CC lib/nvme/nvme_poll_group.o 00:03:30.588 CC lib/nvme/nvme_zns.o 00:03:30.588 CC lib/nvme/nvme_cuse.o 00:03:30.845 CC lib/nvme/nvme_vfio_user.o 00:03:30.845 CC lib/nvme/nvme_rdma.o 00:03:31.102 LIB libspdk_thread.a 00:03:31.368 SO libspdk_thread.so.9.0 00:03:31.368 SYMLINK libspdk_thread.so 00:03:31.368 CC lib/accel/accel.o 00:03:31.368 CC lib/virtio/virtio.o 00:03:31.368 CC lib/virtio/virtio_vhost_user.o 00:03:31.626 CC lib/init/json_config.o 00:03:31.626 CC lib/blob/blobstore.o 00:03:31.626 CC lib/blob/request.o 00:03:31.626 CC lib/blob/zeroes.o 00:03:31.885 CC lib/blob/blob_bs_dev.o 00:03:31.885 CC lib/accel/accel_rpc.o 00:03:31.885 CC lib/init/subsystem.o 00:03:31.885 CC lib/virtio/virtio_vfio_user.o 00:03:32.143 CC lib/virtio/virtio_pci.o 00:03:32.143 CC lib/accel/accel_sw.o 00:03:32.143 CC lib/init/subsystem_rpc.o 00:03:32.143 CC lib/init/rpc.o 00:03:32.400 LIB libspdk_init.a 00:03:32.400 SO libspdk_init.so.4.0 00:03:32.400 SYMLINK libspdk_init.so 00:03:32.658 CC lib/event/app.o 00:03:32.658 CC lib/event/reactor.o 00:03:32.658 CC lib/event/log_rpc.o 00:03:32.658 CC lib/event/app_rpc.o 00:03:32.658 CC lib/event/scheduler_static.o 00:03:32.658 LIB libspdk_virtio.a 00:03:32.916 SO libspdk_virtio.so.6.0 00:03:32.916 LIB libspdk_accel.a 00:03:32.916 SYMLINK libspdk_virtio.so 00:03:32.916 SO libspdk_accel.so.14.0 00:03:32.916 LIB libspdk_nvme.a 00:03:33.174 SYMLINK libspdk_accel.so 00:03:33.174 SO libspdk_nvme.so.12.0 00:03:33.174 CC lib/bdev/bdev.o 00:03:33.174 CC lib/bdev/bdev_rpc.o 00:03:33.174 CC lib/bdev/bdev_zone.o 00:03:33.174 CC lib/bdev/part.o 00:03:33.174 CC lib/bdev/scsi_nvme.o 00:03:33.430 LIB libspdk_event.a 00:03:33.430 SO libspdk_event.so.12.0 00:03:33.688 SYMLINK libspdk_event.so 00:03:33.946 SYMLINK libspdk_nvme.so 00:03:36.474 LIB libspdk_blob.a 00:03:36.474 SO libspdk_blob.so.10.1 00:03:36.474 SYMLINK libspdk_blob.so 00:03:36.474 CC lib/lvol/lvol.o 00:03:36.474 CC lib/blobfs/blobfs.o 00:03:36.474 CC lib/blobfs/tree.o 00:03:37.039 LIB libspdk_bdev.a 00:03:37.297 SO libspdk_bdev.so.14.0 00:03:37.297 SYMLINK libspdk_bdev.so 00:03:37.555 CC lib/ftl/ftl_layout.o 00:03:37.555 CC lib/ublk/ublk_rpc.o 00:03:37.555 CC lib/ublk/ublk.o 00:03:37.555 CC lib/ftl/ftl_core.o 00:03:37.555 CC lib/ftl/ftl_init.o 00:03:37.555 CC lib/nvmf/ctrlr.o 00:03:37.555 CC lib/nbd/nbd.o 00:03:37.555 CC lib/scsi/dev.o 
00:03:37.555 LIB libspdk_blobfs.a 00:03:37.555 SO libspdk_blobfs.so.9.0 00:03:37.813 LIB libspdk_lvol.a 00:03:37.813 CC lib/nbd/nbd_rpc.o 00:03:37.813 SO libspdk_lvol.so.9.1 00:03:37.813 SYMLINK libspdk_blobfs.so 00:03:37.813 CC lib/nvmf/ctrlr_discovery.o 00:03:37.813 SYMLINK libspdk_lvol.so 00:03:37.813 CC lib/nvmf/ctrlr_bdev.o 00:03:37.813 CC lib/ftl/ftl_debug.o 00:03:37.813 CC lib/ftl/ftl_io.o 00:03:37.813 CC lib/ftl/ftl_sb.o 00:03:38.070 CC lib/scsi/lun.o 00:03:38.070 LIB libspdk_nbd.a 00:03:38.070 SO libspdk_nbd.so.6.0 00:03:38.328 SYMLINK libspdk_nbd.so 00:03:38.328 CC lib/scsi/port.o 00:03:38.328 CC lib/scsi/scsi.o 00:03:38.328 CC lib/ftl/ftl_l2p.o 00:03:38.328 CC lib/ftl/ftl_l2p_flat.o 00:03:38.328 CC lib/nvmf/subsystem.o 00:03:38.328 CC lib/nvmf/nvmf.o 00:03:38.328 CC lib/nvmf/nvmf_rpc.o 00:03:38.587 CC lib/nvmf/transport.o 00:03:38.587 LIB libspdk_ublk.a 00:03:38.587 CC lib/scsi/scsi_bdev.o 00:03:38.587 SO libspdk_ublk.so.2.0 00:03:38.587 CC lib/scsi/scsi_pr.o 00:03:38.587 CC lib/ftl/ftl_nv_cache.o 00:03:38.845 SYMLINK libspdk_ublk.so 00:03:38.845 CC lib/ftl/ftl_band.o 00:03:38.845 CC lib/ftl/ftl_band_ops.o 00:03:39.411 CC lib/ftl/ftl_writer.o 00:03:39.411 CC lib/scsi/scsi_rpc.o 00:03:39.669 CC lib/ftl/ftl_rq.o 00:03:39.669 CC lib/nvmf/tcp.o 00:03:39.669 CC lib/scsi/task.o 00:03:39.927 CC lib/ftl/ftl_reloc.o 00:03:39.927 CC lib/ftl/ftl_l2p_cache.o 00:03:39.927 LIB libspdk_scsi.a 00:03:39.927 CC lib/nvmf/rdma.o 00:03:39.927 SO libspdk_scsi.so.8.0 00:03:40.184 CC lib/ftl/ftl_p2l.o 00:03:40.184 SYMLINK libspdk_scsi.so 00:03:40.184 CC lib/ftl/mngt/ftl_mngt.o 00:03:40.184 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:40.442 CC lib/iscsi/conn.o 00:03:40.699 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:40.699 CC lib/iscsi/init_grp.o 00:03:40.699 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:40.956 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:40.956 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:40.956 CC lib/vhost/vhost.o 00:03:40.956 CC lib/vhost/vhost_rpc.o 00:03:41.214 CC lib/vhost/vhost_scsi.o 00:03:41.214 CC lib/vhost/vhost_blk.o 00:03:41.214 CC lib/vhost/rte_vhost_user.o 00:03:41.472 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:41.472 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:41.730 CC lib/iscsi/iscsi.o 00:03:41.730 CC lib/iscsi/md5.o 00:03:41.730 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:42.005 CC lib/iscsi/param.o 00:03:42.276 CC lib/iscsi/portal_grp.o 00:03:42.276 CC lib/iscsi/tgt_node.o 00:03:42.276 CC lib/iscsi/iscsi_subsystem.o 00:03:42.276 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:42.534 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:42.792 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:42.792 CC lib/iscsi/iscsi_rpc.o 00:03:42.792 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:42.792 CC lib/ftl/utils/ftl_conf.o 00:03:43.050 LIB libspdk_vhost.a 00:03:43.050 CC lib/ftl/utils/ftl_md.o 00:03:43.050 CC lib/iscsi/task.o 00:03:43.050 CC lib/ftl/utils/ftl_mempool.o 00:03:43.050 SO libspdk_vhost.so.7.1 00:03:43.050 CC lib/ftl/utils/ftl_bitmap.o 00:03:43.050 CC lib/ftl/utils/ftl_property.o 00:03:43.308 SYMLINK libspdk_vhost.so 00:03:43.308 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:43.308 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:43.308 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:43.308 LIB libspdk_nvmf.a 00:03:43.566 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:43.566 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:43.566 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:43.566 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:43.566 SO libspdk_nvmf.so.17.0 00:03:43.566 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:43.566 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:43.566 CC 
lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:43.566 CC lib/ftl/base/ftl_base_dev.o 00:03:43.824 CC lib/ftl/base/ftl_base_bdev.o 00:03:43.824 CC lib/ftl/ftl_trace.o 00:03:43.824 SYMLINK libspdk_nvmf.so 00:03:44.082 LIB libspdk_ftl.a 00:03:44.340 SO libspdk_ftl.so.8.0 00:03:44.340 LIB libspdk_iscsi.a 00:03:44.598 SO libspdk_iscsi.so.7.0 00:03:44.856 SYMLINK libspdk_ftl.so 00:03:44.856 SYMLINK libspdk_iscsi.so 00:03:44.856 CC module/env_dpdk/env_dpdk_rpc.o 00:03:45.115 CC module/accel/iaa/accel_iaa.o 00:03:45.115 CC module/sock/posix/posix.o 00:03:45.115 CC module/accel/error/accel_error.o 00:03:45.115 CC module/blob/bdev/blob_bdev.o 00:03:45.115 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:45.115 CC module/accel/ioat/accel_ioat.o 00:03:45.115 CC module/accel/dsa/accel_dsa.o 00:03:45.115 CC module/scheduler/gscheduler/gscheduler.o 00:03:45.115 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:45.115 LIB libspdk_env_dpdk_rpc.a 00:03:45.373 SO libspdk_env_dpdk_rpc.so.5.0 00:03:45.373 LIB libspdk_scheduler_dynamic.a 00:03:45.373 CC module/accel/iaa/accel_iaa_rpc.o 00:03:45.373 CC module/accel/error/accel_error_rpc.o 00:03:45.373 SO libspdk_scheduler_dynamic.so.3.0 00:03:45.373 LIB libspdk_scheduler_gscheduler.a 00:03:45.373 LIB libspdk_scheduler_dpdk_governor.a 00:03:45.373 SYMLINK libspdk_env_dpdk_rpc.so 00:03:45.373 CC module/accel/dsa/accel_dsa_rpc.o 00:03:45.373 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:45.373 SO libspdk_scheduler_gscheduler.so.3.0 00:03:45.373 SYMLINK libspdk_scheduler_dynamic.so 00:03:45.373 CC module/accel/ioat/accel_ioat_rpc.o 00:03:45.373 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:45.373 SYMLINK libspdk_scheduler_gscheduler.so 00:03:45.373 LIB libspdk_accel_error.a 00:03:45.643 LIB libspdk_blob_bdev.a 00:03:45.643 LIB libspdk_accel_iaa.a 00:03:45.643 SO libspdk_accel_error.so.1.0 00:03:45.644 SO libspdk_blob_bdev.so.10.1 00:03:45.644 SO libspdk_accel_iaa.so.2.0 00:03:45.644 LIB libspdk_accel_dsa.a 00:03:45.644 LIB libspdk_accel_ioat.a 00:03:45.644 SYMLINK libspdk_blob_bdev.so 00:03:45.644 SO libspdk_accel_dsa.so.4.0 00:03:45.644 SYMLINK libspdk_accel_error.so 00:03:45.644 SO libspdk_accel_ioat.so.5.0 00:03:45.644 SYMLINK libspdk_accel_iaa.so 00:03:45.644 SYMLINK libspdk_accel_dsa.so 00:03:45.644 SYMLINK libspdk_accel_ioat.so 00:03:45.913 CC module/blobfs/bdev/blobfs_bdev.o 00:03:45.913 CC module/bdev/null/bdev_null.o 00:03:45.913 CC module/bdev/malloc/bdev_malloc.o 00:03:45.913 CC module/bdev/gpt/gpt.o 00:03:45.913 CC module/bdev/error/vbdev_error.o 00:03:45.913 CC module/bdev/lvol/vbdev_lvol.o 00:03:45.913 CC module/bdev/delay/vbdev_delay.o 00:03:45.913 CC module/bdev/nvme/bdev_nvme.o 00:03:45.913 CC module/bdev/passthru/vbdev_passthru.o 00:03:45.913 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:45.913 CC module/bdev/gpt/vbdev_gpt.o 00:03:46.171 CC module/bdev/null/bdev_null_rpc.o 00:03:46.171 LIB libspdk_sock_posix.a 00:03:46.171 LIB libspdk_blobfs_bdev.a 00:03:46.171 SO libspdk_sock_posix.so.5.0 00:03:46.429 SO libspdk_blobfs_bdev.so.5.0 00:03:46.429 CC module/bdev/error/vbdev_error_rpc.o 00:03:46.429 SYMLINK libspdk_sock_posix.so 00:03:46.429 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:46.429 SYMLINK libspdk_blobfs_bdev.so 00:03:46.429 LIB libspdk_bdev_null.a 00:03:46.429 LIB libspdk_bdev_gpt.a 00:03:46.429 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:46.429 SO libspdk_bdev_null.so.5.0 00:03:46.429 SO libspdk_bdev_gpt.so.5.0 00:03:46.429 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:46.429 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:46.687 
SYMLINK libspdk_bdev_null.so 00:03:46.687 SYMLINK libspdk_bdev_gpt.so 00:03:46.687 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:46.687 CC module/bdev/raid/bdev_raid.o 00:03:46.687 CC module/bdev/raid/bdev_raid_rpc.o 00:03:46.687 LIB libspdk_bdev_passthru.a 00:03:46.687 LIB libspdk_bdev_error.a 00:03:46.687 SO libspdk_bdev_passthru.so.5.0 00:03:46.687 LIB libspdk_bdev_malloc.a 00:03:46.687 SO libspdk_bdev_error.so.5.0 00:03:46.687 CC module/bdev/split/vbdev_split.o 00:03:46.687 SO libspdk_bdev_malloc.so.5.0 00:03:46.687 LIB libspdk_bdev_lvol.a 00:03:46.687 SYMLINK libspdk_bdev_error.so 00:03:46.687 SYMLINK libspdk_bdev_passthru.so 00:03:46.687 CC module/bdev/split/vbdev_split_rpc.o 00:03:46.687 CC module/bdev/nvme/nvme_rpc.o 00:03:46.687 SYMLINK libspdk_bdev_malloc.so 00:03:46.945 CC module/bdev/raid/bdev_raid_sb.o 00:03:46.945 SO libspdk_bdev_lvol.so.5.0 00:03:46.945 LIB libspdk_bdev_delay.a 00:03:46.945 SO libspdk_bdev_delay.so.5.0 00:03:46.945 SYMLINK libspdk_bdev_lvol.so 00:03:46.945 CC module/bdev/nvme/bdev_mdns_client.o 00:03:46.945 SYMLINK libspdk_bdev_delay.so 00:03:46.945 CC module/bdev/nvme/vbdev_opal.o 00:03:46.945 CC module/bdev/raid/raid0.o 00:03:47.203 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:47.203 LIB libspdk_bdev_split.a 00:03:47.203 CC module/bdev/raid/raid1.o 00:03:47.203 SO libspdk_bdev_split.so.5.0 00:03:47.203 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:47.203 SYMLINK libspdk_bdev_split.so 00:03:47.461 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:47.461 CC module/bdev/xnvme/bdev_xnvme.o 00:03:47.461 CC module/bdev/aio/bdev_aio.o 00:03:47.461 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:47.461 CC module/bdev/raid/concat.o 00:03:47.719 CC module/bdev/ftl/bdev_ftl.o 00:03:47.719 CC module/bdev/iscsi/bdev_iscsi.o 00:03:47.719 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:47.719 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:47.719 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:47.976 CC module/bdev/aio/bdev_aio_rpc.o 00:03:47.976 LIB libspdk_bdev_xnvme.a 00:03:47.976 SO libspdk_bdev_xnvme.so.2.0 00:03:47.976 LIB libspdk_bdev_raid.a 00:03:47.976 LIB libspdk_bdev_zone_block.a 00:03:47.976 SO libspdk_bdev_raid.so.5.0 00:03:47.976 SYMLINK libspdk_bdev_xnvme.so 00:03:47.976 SO libspdk_bdev_zone_block.so.5.0 00:03:48.234 LIB libspdk_bdev_iscsi.a 00:03:48.234 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:48.234 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:48.234 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:48.234 SO libspdk_bdev_iscsi.so.5.0 00:03:48.234 SYMLINK libspdk_bdev_raid.so 00:03:48.234 SYMLINK libspdk_bdev_zone_block.so 00:03:48.234 LIB libspdk_bdev_aio.a 00:03:48.234 SYMLINK libspdk_bdev_iscsi.so 00:03:48.234 SO libspdk_bdev_aio.so.5.0 00:03:48.234 LIB libspdk_bdev_ftl.a 00:03:48.234 SO libspdk_bdev_ftl.so.5.0 00:03:48.491 SYMLINK libspdk_bdev_aio.so 00:03:48.491 SYMLINK libspdk_bdev_ftl.so 00:03:48.748 LIB libspdk_bdev_virtio.a 00:03:48.748 SO libspdk_bdev_virtio.so.5.0 00:03:49.006 SYMLINK libspdk_bdev_virtio.so 00:03:49.940 LIB libspdk_bdev_nvme.a 00:03:49.940 SO libspdk_bdev_nvme.so.6.0 00:03:50.197 SYMLINK libspdk_bdev_nvme.so 00:03:50.454 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:50.454 CC module/event/subsystems/sock/sock.o 00:03:50.454 CC module/event/subsystems/vmd/vmd.o 00:03:50.454 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:50.454 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:50.454 CC module/event/subsystems/iobuf/iobuf.o 00:03:50.454 CC module/event/subsystems/scheduler/scheduler.o 00:03:50.454 LIB libspdk_event_vhost_blk.a 
00:03:50.454 LIB libspdk_event_sock.a 00:03:50.454 LIB libspdk_event_scheduler.a 00:03:50.454 SO libspdk_event_sock.so.4.0 00:03:50.454 SO libspdk_event_vhost_blk.so.2.0 00:03:50.454 LIB libspdk_event_iobuf.a 00:03:50.454 SO libspdk_event_scheduler.so.3.0 00:03:50.712 LIB libspdk_event_vmd.a 00:03:50.712 SO libspdk_event_vmd.so.5.0 00:03:50.712 SO libspdk_event_iobuf.so.2.0 00:03:50.712 SYMLINK libspdk_event_sock.so 00:03:50.712 SYMLINK libspdk_event_scheduler.so 00:03:50.712 SYMLINK libspdk_event_vhost_blk.so 00:03:50.712 SYMLINK libspdk_event_vmd.so 00:03:50.712 SYMLINK libspdk_event_iobuf.so 00:03:50.969 CC module/event/subsystems/accel/accel.o 00:03:50.969 LIB libspdk_event_accel.a 00:03:50.969 SO libspdk_event_accel.so.5.0 00:03:51.227 SYMLINK libspdk_event_accel.so 00:03:51.227 CC module/event/subsystems/bdev/bdev.o 00:03:51.485 LIB libspdk_event_bdev.a 00:03:51.485 SO libspdk_event_bdev.so.5.0 00:03:51.742 SYMLINK libspdk_event_bdev.so 00:03:51.742 CC module/event/subsystems/ublk/ublk.o 00:03:51.742 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:51.742 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:51.742 CC module/event/subsystems/scsi/scsi.o 00:03:51.742 CC module/event/subsystems/nbd/nbd.o 00:03:51.998 LIB libspdk_event_ublk.a 00:03:51.998 SO libspdk_event_ublk.so.2.0 00:03:51.998 LIB libspdk_event_nbd.a 00:03:51.998 LIB libspdk_event_scsi.a 00:03:51.998 SO libspdk_event_nbd.so.5.0 00:03:51.998 SO libspdk_event_scsi.so.5.0 00:03:51.998 SYMLINK libspdk_event_ublk.so 00:03:51.998 SYMLINK libspdk_event_nbd.so 00:03:51.998 LIB libspdk_event_nvmf.a 00:03:51.998 SYMLINK libspdk_event_scsi.so 00:03:51.998 SO libspdk_event_nvmf.so.5.0 00:03:52.255 SYMLINK libspdk_event_nvmf.so 00:03:52.255 CC module/event/subsystems/iscsi/iscsi.o 00:03:52.255 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:52.513 LIB libspdk_event_vhost_scsi.a 00:03:52.513 LIB libspdk_event_iscsi.a 00:03:52.513 SO libspdk_event_iscsi.so.5.0 00:03:52.513 SO libspdk_event_vhost_scsi.so.2.0 00:03:52.513 SYMLINK libspdk_event_iscsi.so 00:03:52.513 SYMLINK libspdk_event_vhost_scsi.so 00:03:52.513 SO libspdk.so.5.0 00:03:52.513 SYMLINK libspdk.so 00:03:52.771 CXX app/trace/trace.o 00:03:52.771 CC examples/nvme/hello_world/hello_world.o 00:03:52.771 CC examples/sock/hello_world/hello_sock.o 00:03:52.771 CC examples/ioat/perf/perf.o 00:03:52.771 CC examples/accel/perf/accel_perf.o 00:03:52.771 CC examples/blob/hello_world/hello_blob.o 00:03:53.029 CC test/accel/dif/dif.o 00:03:53.029 CC examples/bdev/hello_world/hello_bdev.o 00:03:53.029 CC test/bdev/bdevio/bdevio.o 00:03:53.029 CC test/app/bdev_svc/bdev_svc.o 00:03:53.029 LINK hello_world 00:03:53.029 LINK hello_blob 00:03:53.287 LINK bdev_svc 00:03:53.287 LINK ioat_perf 00:03:53.287 LINK hello_sock 00:03:53.287 LINK spdk_trace 00:03:53.287 LINK hello_bdev 00:03:53.287 CC examples/nvme/reconnect/reconnect.o 00:03:53.544 CC examples/ioat/verify/verify.o 00:03:53.544 CC examples/blob/cli/blobcli.o 00:03:53.544 LINK bdevio 00:03:53.544 LINK dif 00:03:53.544 LINK accel_perf 00:03:53.544 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:53.544 CC app/trace_record/trace_record.o 00:03:53.544 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:53.544 CC examples/bdev/bdevperf/bdevperf.o 00:03:53.802 LINK verify 00:03:53.802 CC test/app/histogram_perf/histogram_perf.o 00:03:53.802 CC examples/nvme/arbitration/arbitration.o 00:03:53.802 CC examples/nvme/hotplug/hotplug.o 00:03:53.802 LINK reconnect 00:03:53.802 LINK spdk_trace_record 00:03:53.802 LINK histogram_perf 00:03:53.802 CC 
examples/nvme/cmb_copy/cmb_copy.o 00:03:54.060 LINK hotplug 00:03:54.060 LINK blobcli 00:03:54.060 LINK nvme_fuzz 00:03:54.060 CC app/nvmf_tgt/nvmf_main.o 00:03:54.060 CC test/blobfs/mkfs/mkfs.o 00:03:54.060 LINK cmb_copy 00:03:54.060 LINK nvme_manage 00:03:54.060 LINK arbitration 00:03:54.060 CC app/iscsi_tgt/iscsi_tgt.o 00:03:54.319 LINK nvmf_tgt 00:03:54.319 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:54.319 CC app/spdk_tgt/spdk_tgt.o 00:03:54.319 LINK mkfs 00:03:54.319 CC examples/nvme/abort/abort.o 00:03:54.319 TEST_HEADER include/spdk/accel.h 00:03:54.319 TEST_HEADER include/spdk/accel_module.h 00:03:54.319 TEST_HEADER include/spdk/assert.h 00:03:54.319 TEST_HEADER include/spdk/barrier.h 00:03:54.319 TEST_HEADER include/spdk/base64.h 00:03:54.319 TEST_HEADER include/spdk/bdev.h 00:03:54.319 TEST_HEADER include/spdk/bdev_module.h 00:03:54.319 TEST_HEADER include/spdk/bdev_zone.h 00:03:54.319 TEST_HEADER include/spdk/bit_array.h 00:03:54.319 TEST_HEADER include/spdk/bit_pool.h 00:03:54.319 TEST_HEADER include/spdk/blob_bdev.h 00:03:54.319 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:54.319 TEST_HEADER include/spdk/blobfs.h 00:03:54.319 TEST_HEADER include/spdk/blob.h 00:03:54.319 TEST_HEADER include/spdk/conf.h 00:03:54.319 TEST_HEADER include/spdk/config.h 00:03:54.319 TEST_HEADER include/spdk/cpuset.h 00:03:54.319 LINK iscsi_tgt 00:03:54.319 TEST_HEADER include/spdk/crc16.h 00:03:54.319 TEST_HEADER include/spdk/crc32.h 00:03:54.319 TEST_HEADER include/spdk/crc64.h 00:03:54.319 TEST_HEADER include/spdk/dif.h 00:03:54.319 TEST_HEADER include/spdk/dma.h 00:03:54.319 TEST_HEADER include/spdk/endian.h 00:03:54.319 TEST_HEADER include/spdk/env_dpdk.h 00:03:54.319 TEST_HEADER include/spdk/env.h 00:03:54.319 TEST_HEADER include/spdk/event.h 00:03:54.319 TEST_HEADER include/spdk/fd_group.h 00:03:54.319 TEST_HEADER include/spdk/fd.h 00:03:54.319 TEST_HEADER include/spdk/file.h 00:03:54.319 TEST_HEADER include/spdk/ftl.h 00:03:54.319 TEST_HEADER include/spdk/gpt_spec.h 00:03:54.319 TEST_HEADER include/spdk/hexlify.h 00:03:54.319 TEST_HEADER include/spdk/histogram_data.h 00:03:54.319 TEST_HEADER include/spdk/idxd.h 00:03:54.319 TEST_HEADER include/spdk/idxd_spec.h 00:03:54.319 TEST_HEADER include/spdk/init.h 00:03:54.319 TEST_HEADER include/spdk/ioat.h 00:03:54.577 TEST_HEADER include/spdk/ioat_spec.h 00:03:54.577 TEST_HEADER include/spdk/iscsi_spec.h 00:03:54.577 TEST_HEADER include/spdk/json.h 00:03:54.577 TEST_HEADER include/spdk/jsonrpc.h 00:03:54.577 TEST_HEADER include/spdk/likely.h 00:03:54.577 TEST_HEADER include/spdk/log.h 00:03:54.577 TEST_HEADER include/spdk/lvol.h 00:03:54.577 TEST_HEADER include/spdk/memory.h 00:03:54.577 CC test/dma/test_dma/test_dma.o 00:03:54.577 TEST_HEADER include/spdk/mmio.h 00:03:54.577 TEST_HEADER include/spdk/nbd.h 00:03:54.577 TEST_HEADER include/spdk/notify.h 00:03:54.577 TEST_HEADER include/spdk/nvme.h 00:03:54.577 TEST_HEADER include/spdk/nvme_intel.h 00:03:54.577 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:54.578 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:54.578 TEST_HEADER include/spdk/nvme_spec.h 00:03:54.578 TEST_HEADER include/spdk/nvme_zns.h 00:03:54.578 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:54.578 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:54.578 TEST_HEADER include/spdk/nvmf.h 00:03:54.578 TEST_HEADER include/spdk/nvmf_spec.h 00:03:54.578 TEST_HEADER include/spdk/nvmf_transport.h 00:03:54.578 TEST_HEADER include/spdk/opal.h 00:03:54.578 TEST_HEADER include/spdk/opal_spec.h 00:03:54.578 TEST_HEADER include/spdk/pci_ids.h 
00:03:54.578 TEST_HEADER include/spdk/pipe.h 00:03:54.578 TEST_HEADER include/spdk/queue.h 00:03:54.578 CC test/env/mem_callbacks/mem_callbacks.o 00:03:54.578 TEST_HEADER include/spdk/reduce.h 00:03:54.578 TEST_HEADER include/spdk/rpc.h 00:03:54.578 TEST_HEADER include/spdk/scheduler.h 00:03:54.578 TEST_HEADER include/spdk/scsi.h 00:03:54.578 TEST_HEADER include/spdk/scsi_spec.h 00:03:54.578 TEST_HEADER include/spdk/sock.h 00:03:54.578 TEST_HEADER include/spdk/stdinc.h 00:03:54.578 TEST_HEADER include/spdk/string.h 00:03:54.578 TEST_HEADER include/spdk/thread.h 00:03:54.578 TEST_HEADER include/spdk/trace.h 00:03:54.578 TEST_HEADER include/spdk/trace_parser.h 00:03:54.578 TEST_HEADER include/spdk/tree.h 00:03:54.578 TEST_HEADER include/spdk/ublk.h 00:03:54.578 TEST_HEADER include/spdk/util.h 00:03:54.578 TEST_HEADER include/spdk/uuid.h 00:03:54.578 TEST_HEADER include/spdk/version.h 00:03:54.578 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:54.578 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:54.578 TEST_HEADER include/spdk/vhost.h 00:03:54.578 TEST_HEADER include/spdk/vmd.h 00:03:54.578 TEST_HEADER include/spdk/xor.h 00:03:54.578 TEST_HEADER include/spdk/zipf.h 00:03:54.578 CXX test/cpp_headers/accel.o 00:03:54.578 CC test/env/vtophys/vtophys.o 00:03:54.578 LINK spdk_tgt 00:03:54.578 LINK bdevperf 00:03:54.578 CC app/spdk_lspci/spdk_lspci.o 00:03:54.578 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:54.837 LINK vtophys 00:03:54.837 CXX test/cpp_headers/accel_module.o 00:03:54.837 LINK spdk_lspci 00:03:54.837 CXX test/cpp_headers/assert.o 00:03:54.837 LINK abort 00:03:54.837 CXX test/cpp_headers/barrier.o 00:03:54.837 LINK env_dpdk_post_init 00:03:54.837 CXX test/cpp_headers/base64.o 00:03:54.837 LINK test_dma 00:03:55.100 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:55.100 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:55.100 CC app/spdk_nvme_perf/perf.o 00:03:55.100 CC app/spdk_nvme_identify/identify.o 00:03:55.100 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:55.100 CC test/env/memory/memory_ut.o 00:03:55.100 CXX test/cpp_headers/bdev.o 00:03:55.100 LINK mem_callbacks 00:03:55.100 CC app/spdk_nvme_discover/discovery_aer.o 00:03:55.100 CC app/spdk_top/spdk_top.o 00:03:55.100 LINK pmr_persistence 00:03:55.357 CXX test/cpp_headers/bdev_module.o 00:03:55.357 CC app/vhost/vhost.o 00:03:55.357 LINK spdk_nvme_discover 00:03:55.615 CXX test/cpp_headers/bdev_zone.o 00:03:55.615 LINK vhost_fuzz 00:03:55.615 CC examples/vmd/lsvmd/lsvmd.o 00:03:55.615 LINK vhost 00:03:55.615 CC examples/vmd/led/led.o 00:03:55.615 LINK lsvmd 00:03:55.615 CXX test/cpp_headers/bit_array.o 00:03:55.873 LINK led 00:03:55.873 CC examples/nvmf/nvmf/nvmf.o 00:03:55.873 CXX test/cpp_headers/bit_pool.o 00:03:55.873 CC app/spdk_dd/spdk_dd.o 00:03:55.873 CC app/fio/nvme/fio_plugin.o 00:03:56.131 CXX test/cpp_headers/blob_bdev.o 00:03:56.131 LINK spdk_nvme_identify 00:03:56.131 LINK spdk_nvme_perf 00:03:56.131 LINK memory_ut 00:03:56.131 CC examples/util/zipf/zipf.o 00:03:56.131 CXX test/cpp_headers/blobfs_bdev.o 00:03:56.131 LINK nvmf 00:03:56.390 LINK spdk_top 00:03:56.390 CXX test/cpp_headers/blobfs.o 00:03:56.390 LINK zipf 00:03:56.390 LINK spdk_dd 00:03:56.390 CC test/env/pci/pci_ut.o 00:03:56.390 CC examples/thread/thread/thread_ex.o 00:03:56.390 CXX test/cpp_headers/blob.o 00:03:56.390 LINK iscsi_fuzz 00:03:56.390 CC app/fio/bdev/fio_plugin.o 00:03:56.648 CC test/event/event_perf/event_perf.o 00:03:56.648 CXX test/cpp_headers/conf.o 00:03:56.648 CC test/nvme/aer/aer.o 00:03:56.648 CC 
test/nvme/reset/reset.o 00:03:56.648 LINK thread 00:03:56.648 LINK spdk_nvme 00:03:56.648 CC test/lvol/esnap/esnap.o 00:03:56.648 LINK event_perf 00:03:56.648 CXX test/cpp_headers/config.o 00:03:56.648 CC test/app/jsoncat/jsoncat.o 00:03:56.907 CXX test/cpp_headers/cpuset.o 00:03:56.907 LINK pci_ut 00:03:56.907 CXX test/cpp_headers/crc16.o 00:03:56.907 LINK jsoncat 00:03:56.907 CC examples/idxd/perf/perf.o 00:03:56.907 CC test/event/reactor/reactor.o 00:03:56.907 LINK reset 00:03:56.907 LINK aer 00:03:56.907 CC test/event/reactor_perf/reactor_perf.o 00:03:57.165 CXX test/cpp_headers/crc32.o 00:03:57.165 LINK spdk_bdev 00:03:57.165 LINK reactor 00:03:57.165 CC test/app/stub/stub.o 00:03:57.165 LINK reactor_perf 00:03:57.165 CC test/rpc_client/rpc_client_test.o 00:03:57.165 CC test/nvme/sgl/sgl.o 00:03:57.165 CXX test/cpp_headers/crc64.o 00:03:57.165 CC test/thread/poller_perf/poller_perf.o 00:03:57.424 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:57.424 CC test/event/app_repeat/app_repeat.o 00:03:57.424 LINK stub 00:03:57.424 LINK idxd_perf 00:03:57.424 CXX test/cpp_headers/dif.o 00:03:57.424 LINK rpc_client_test 00:03:57.424 CC test/event/scheduler/scheduler.o 00:03:57.424 LINK poller_perf 00:03:57.424 LINK app_repeat 00:03:57.424 LINK interrupt_tgt 00:03:57.424 LINK sgl 00:03:57.424 CXX test/cpp_headers/dma.o 00:03:57.682 CC test/nvme/e2edp/nvme_dp.o 00:03:57.682 CC test/nvme/overhead/overhead.o 00:03:57.682 CC test/nvme/err_injection/err_injection.o 00:03:57.682 CC test/nvme/startup/startup.o 00:03:57.682 CXX test/cpp_headers/endian.o 00:03:57.682 LINK scheduler 00:03:57.682 CC test/nvme/reserve/reserve.o 00:03:57.682 CC test/nvme/simple_copy/simple_copy.o 00:03:57.682 CC test/nvme/connect_stress/connect_stress.o 00:03:57.941 CXX test/cpp_headers/env_dpdk.o 00:03:57.941 LINK err_injection 00:03:57.941 LINK startup 00:03:57.941 LINK nvme_dp 00:03:57.941 CXX test/cpp_headers/env.o 00:03:57.941 LINK overhead 00:03:57.941 LINK connect_stress 00:03:57.941 LINK reserve 00:03:57.941 CXX test/cpp_headers/event.o 00:03:57.941 LINK simple_copy 00:03:57.941 CXX test/cpp_headers/fd_group.o 00:03:58.199 CC test/nvme/boot_partition/boot_partition.o 00:03:58.199 CC test/nvme/compliance/nvme_compliance.o 00:03:58.199 CC test/nvme/fused_ordering/fused_ordering.o 00:03:58.199 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:58.199 CXX test/cpp_headers/fd.o 00:03:58.199 CXX test/cpp_headers/file.o 00:03:58.199 CC test/nvme/fdp/fdp.o 00:03:58.199 CC test/nvme/cuse/cuse.o 00:03:58.199 CXX test/cpp_headers/ftl.o 00:03:58.199 LINK boot_partition 00:03:58.458 LINK doorbell_aers 00:03:58.458 LINK fused_ordering 00:03:58.458 CXX test/cpp_headers/gpt_spec.o 00:03:58.458 CXX test/cpp_headers/hexlify.o 00:03:58.458 CXX test/cpp_headers/histogram_data.o 00:03:58.458 CXX test/cpp_headers/idxd.o 00:03:58.458 LINK nvme_compliance 00:03:58.458 CXX test/cpp_headers/idxd_spec.o 00:03:58.458 CXX test/cpp_headers/init.o 00:03:58.458 CXX test/cpp_headers/ioat.o 00:03:58.458 CXX test/cpp_headers/ioat_spec.o 00:03:58.716 LINK fdp 00:03:58.716 CXX test/cpp_headers/iscsi_spec.o 00:03:58.716 CXX test/cpp_headers/json.o 00:03:58.716 CXX test/cpp_headers/jsonrpc.o 00:03:58.716 CXX test/cpp_headers/likely.o 00:03:58.716 CXX test/cpp_headers/log.o 00:03:58.716 CXX test/cpp_headers/lvol.o 00:03:58.716 CXX test/cpp_headers/memory.o 00:03:58.716 CXX test/cpp_headers/mmio.o 00:03:58.716 CXX test/cpp_headers/nbd.o 00:03:58.716 CXX test/cpp_headers/notify.o 00:03:58.716 CXX test/cpp_headers/nvme.o 00:03:58.975 CXX 
test/cpp_headers/nvme_intel.o 00:03:58.975 CXX test/cpp_headers/nvme_ocssd.o 00:03:58.975 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:58.975 CXX test/cpp_headers/nvme_spec.o 00:03:58.975 CXX test/cpp_headers/nvme_zns.o 00:03:58.975 CXX test/cpp_headers/nvmf_cmd.o 00:03:58.975 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:58.975 CXX test/cpp_headers/nvmf.o 00:03:58.975 CXX test/cpp_headers/nvmf_spec.o 00:03:58.975 CXX test/cpp_headers/nvmf_transport.o 00:03:58.975 CXX test/cpp_headers/opal.o 00:03:58.975 CXX test/cpp_headers/opal_spec.o 00:03:59.234 CXX test/cpp_headers/pci_ids.o 00:03:59.234 CXX test/cpp_headers/pipe.o 00:03:59.234 CXX test/cpp_headers/queue.o 00:03:59.234 CXX test/cpp_headers/reduce.o 00:03:59.234 CXX test/cpp_headers/rpc.o 00:03:59.234 CXX test/cpp_headers/scheduler.o 00:03:59.234 CXX test/cpp_headers/scsi.o 00:03:59.234 CXX test/cpp_headers/scsi_spec.o 00:03:59.234 CXX test/cpp_headers/sock.o 00:03:59.234 CXX test/cpp_headers/stdinc.o 00:03:59.234 CXX test/cpp_headers/string.o 00:03:59.493 CXX test/cpp_headers/thread.o 00:03:59.493 LINK cuse 00:03:59.493 CXX test/cpp_headers/trace.o 00:03:59.493 CXX test/cpp_headers/trace_parser.o 00:03:59.493 CXX test/cpp_headers/tree.o 00:03:59.493 CXX test/cpp_headers/ublk.o 00:03:59.493 CXX test/cpp_headers/util.o 00:03:59.493 CXX test/cpp_headers/uuid.o 00:03:59.493 CXX test/cpp_headers/version.o 00:03:59.493 CXX test/cpp_headers/vfio_user_pci.o 00:03:59.493 CXX test/cpp_headers/vfio_user_spec.o 00:03:59.493 CXX test/cpp_headers/vhost.o 00:03:59.493 CXX test/cpp_headers/vmd.o 00:03:59.751 CXX test/cpp_headers/xor.o 00:03:59.751 CXX test/cpp_headers/zipf.o 00:04:03.035 LINK esnap 00:04:03.035 00:04:03.035 real 1m32.583s 00:04:03.035 user 9m55.070s 00:04:03.035 sys 1m47.756s 00:04:03.035 15:29:24 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:03.035 15:29:24 -- common/autotest_common.sh@10 -- $ set +x 00:04:03.035 ************************************ 00:04:03.035 END TEST make 00:04:03.035 ************************************ 00:04:03.294 15:29:24 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:03.294 15:29:24 -- nvmf/common.sh@7 -- # uname -s 00:04:03.294 15:29:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:03.294 15:29:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:03.294 15:29:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:03.294 15:29:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:03.294 15:29:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:03.294 15:29:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:03.294 15:29:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:03.294 15:29:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:03.294 15:29:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:03.294 15:29:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:03.294 15:29:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f241b322-a8d6-49ac-997e-35dd184b3295 00:04:03.294 15:29:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=f241b322-a8d6-49ac-997e-35dd184b3295 00:04:03.294 15:29:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:03.294 15:29:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:03.294 15:29:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:03.294 15:29:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:03.294 15:29:24 -- scripts/common.sh@433 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:04:03.294 15:29:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:03.294 15:29:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:03.295 15:29:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.295 15:29:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.295 15:29:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.295 15:29:24 -- paths/export.sh@5 -- # export PATH 00:04:03.295 15:29:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.295 15:29:24 -- nvmf/common.sh@46 -- # : 0 00:04:03.295 15:29:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:03.295 15:29:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:03.295 15:29:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:03.295 15:29:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:03.295 15:29:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:03.295 15:29:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:03.295 15:29:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:03.295 15:29:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:03.295 15:29:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:03.295 15:29:24 -- spdk/autotest.sh@32 -- # uname -s 00:04:03.295 15:29:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:03.295 15:29:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:03.295 15:29:24 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:03.295 15:29:24 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:03.295 15:29:24 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:03.295 15:29:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:03.295 15:29:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:03.295 15:29:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:03.295 15:29:24 -- spdk/autotest.sh@48 -- # udevadm_pid=48329 00:04:03.295 15:29:24 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:03.295 15:29:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:03.295 15:29:24 -- spdk/autotest.sh@54 -- # echo 48331 00:04:03.295 15:29:24 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:03.295 15:29:24 -- spdk/autotest.sh@56 -- # echo 48332 
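The autotest.sh trace above saves the systemd-coredump handler, points kernel crash dumps at scripts/core-collector.sh, and starts the udevadm and collect-cpu-load/collect-vmstat monitors. A minimal sketch of the core-dump redirection, assuming the standard /proc/sys/kernel/core_pattern interface and the $rootdir/$output_dir names; the trace shows the echo and mkdir, not the redirection target or the restore:

# Save the current handler ('|/usr/lib/systemd/systemd-coredump ...' above).
old_core_pattern=$(< /proc/sys/kernel/core_pattern)
mkdir -p "$output_dir/coredumps"
# %P = pid, %s = signal, %t = dump time; the leading '|' makes the kernel
# pipe each core straight into the collector script.
echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
# Assumed cleanup step: put the systemd handler back when the run ends.
trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT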
00:04:03.295 15:29:24 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:03.295 15:29:24 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:03.295 15:29:24 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:03.295 15:29:24 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:03.295 15:29:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:03.295 15:29:24 -- common/autotest_common.sh@10 -- # set +x 00:04:03.295 15:29:24 -- spdk/autotest.sh@70 -- # create_test_list 00:04:03.295 15:29:24 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:03.295 15:29:24 -- common/autotest_common.sh@10 -- # set +x 00:04:03.295 15:29:24 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:03.295 15:29:24 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:03.295 15:29:24 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:03.295 15:29:24 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:03.295 15:29:24 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:03.295 15:29:24 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:03.295 15:29:24 -- common/autotest_common.sh@1440 -- # uname 00:04:03.295 15:29:24 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:03.295 15:29:24 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:03.295 15:29:24 -- common/autotest_common.sh@1460 -- # uname 00:04:03.295 15:29:24 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:03.295 15:29:24 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:03.295 15:29:24 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:03.295 15:29:24 -- spdk/autotest.sh@83 -- # hash lcov 00:04:03.295 15:29:24 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:03.295 15:29:24 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:03.295 --rc lcov_branch_coverage=1 00:04:03.295 --rc lcov_function_coverage=1 00:04:03.295 --rc genhtml_branch_coverage=1 00:04:03.295 --rc genhtml_function_coverage=1 00:04:03.295 --rc genhtml_legend=1 00:04:03.295 --rc geninfo_all_blocks=1 00:04:03.295 ' 00:04:03.295 15:29:24 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:03.295 --rc lcov_branch_coverage=1 00:04:03.295 --rc lcov_function_coverage=1 00:04:03.295 --rc genhtml_branch_coverage=1 00:04:03.295 --rc genhtml_function_coverage=1 00:04:03.295 --rc genhtml_legend=1 00:04:03.295 --rc geninfo_all_blocks=1 00:04:03.295 ' 00:04:03.295 15:29:24 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:03.295 --rc lcov_branch_coverage=1 00:04:03.295 --rc lcov_function_coverage=1 00:04:03.295 --rc genhtml_branch_coverage=1 00:04:03.295 --rc genhtml_function_coverage=1 00:04:03.295 --rc genhtml_legend=1 00:04:03.295 --rc geninfo_all_blocks=1 00:04:03.295 --no-external' 00:04:03.295 15:29:24 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:03.295 --rc lcov_branch_coverage=1 00:04:03.295 --rc lcov_function_coverage=1 00:04:03.295 --rc genhtml_branch_coverage=1 00:04:03.295 --rc genhtml_function_coverage=1 00:04:03.295 --rc genhtml_legend=1 00:04:03.295 --rc geninfo_all_blocks=1 00:04:03.295 --no-external' 00:04:03.295 15:29:24 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:03.566 lcov: LCOV 
version 1.14 00:04:03.566 15:29:24 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:11.690 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:11.690 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:11.690 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:11.690 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:11.690 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:11.690 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:33.643 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:33.643 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:33.643 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no 
functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:33.644 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:33.644 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:33.644 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:33.645 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:33.645 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:33.645 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:33.645 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:33.645 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:33.645 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:33.645 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:33.645 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:33.645 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:33.645 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:33.645 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:33.645 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:33.645 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:33.645 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:33.645 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:33.645 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:33.645 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:33.645 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:33.645 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:33.645 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:33.645 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:33.645 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:33.645 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:33.645 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:33.645 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:33.645 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:33.645 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:34.579 15:29:56 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:34.579 15:29:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:34.579 15:29:56 -- common/autotest_common.sh@10 -- # set +x 00:04:34.579 15:29:56 -- spdk/autotest.sh@102 -- # rm -f 00:04:34.579 15:29:56 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:35.514 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.514 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:04:35.773 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:04:35.773 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:35.773 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:35.773 15:29:57 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:35.773 15:29:57 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:35.773 15:29:57 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:35.773 15:29:57 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:35.773 15:29:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:35.773 15:29:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:35.773 15:29:57 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:35.773 15:29:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.773 15:29:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:35.773 15:29:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:35.773 15:29:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:35.773 15:29:57 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:35.773 
15:29:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:35.773 15:29:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:35.773 15:29:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:35.773 15:29:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:04:35.773 15:29:57 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:04:35.773 15:29:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:35.773 15:29:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:35.773 15:29:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:35.773 15:29:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n2 00:04:35.773 15:29:57 -- common/autotest_common.sh@1647 -- # local device=nvme2n2 00:04:35.773 15:29:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:35.773 15:29:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:35.773 15:29:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:35.773 15:29:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n3 00:04:35.773 15:29:57 -- common/autotest_common.sh@1647 -- # local device=nvme2n3 00:04:35.773 15:29:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:35.773 15:29:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:35.773 15:29:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:35.773 15:29:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3c3n1 00:04:35.773 15:29:57 -- common/autotest_common.sh@1647 -- # local device=nvme3c3n1 00:04:35.773 15:29:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:35.773 15:29:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:35.773 15:29:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:35.773 15:29:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:04:35.773 15:29:57 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:04:35.773 15:29:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:35.773 15:29:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:35.773 15:29:57 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:35.773 15:29:57 -- spdk/autotest.sh@121 -- # grep -v p 00:04:35.773 15:29:57 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:04:35.773 15:29:57 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:35.773 15:29:57 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:35.773 15:29:57 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:35.773 15:29:57 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:35.773 15:29:57 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:35.773 No valid GPT data, bailing 00:04:35.773 15:29:57 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:35.773 15:29:57 -- scripts/common.sh@393 -- # pt= 00:04:35.773 15:29:57 -- scripts/common.sh@394 -- # return 1 00:04:35.773 15:29:57 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:35.773 1+0 records in 00:04:35.773 1+0 records out 00:04:35.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142906 s, 73.4 MB/s 00:04:35.773 15:29:57 -- 
spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:35.773 15:29:57 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:35.773 15:29:57 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:04:35.773 15:29:57 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:35.773 15:29:57 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:35.773 No valid GPT data, bailing 00:04:36.032 15:29:57 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:36.032 15:29:57 -- scripts/common.sh@393 -- # pt= 00:04:36.032 15:29:57 -- scripts/common.sh@394 -- # return 1 00:04:36.032 15:29:57 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:36.032 1+0 records in 00:04:36.032 1+0 records out 00:04:36.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00392172 s, 267 MB/s 00:04:36.032 15:29:57 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:36.032 15:29:57 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:36.032 15:29:57 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n1 00:04:36.032 15:29:57 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:04:36.032 15:29:57 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:36.032 No valid GPT data, bailing 00:04:36.032 15:29:57 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:36.032 15:29:57 -- scripts/common.sh@393 -- # pt= 00:04:36.032 15:29:57 -- scripts/common.sh@394 -- # return 1 00:04:36.032 15:29:57 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:36.032 1+0 records in 00:04:36.032 1+0 records out 00:04:36.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00411358 s, 255 MB/s 00:04:36.032 15:29:57 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:36.032 15:29:57 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:36.032 15:29:57 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n2 00:04:36.032 15:29:57 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:04:36.032 15:29:57 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:36.032 No valid GPT data, bailing 00:04:36.032 15:29:57 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:36.032 15:29:57 -- scripts/common.sh@393 -- # pt= 00:04:36.032 15:29:57 -- scripts/common.sh@394 -- # return 1 00:04:36.032 15:29:57 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:36.032 1+0 records in 00:04:36.032 1+0 records out 00:04:36.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00362548 s, 289 MB/s 00:04:36.032 15:29:57 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:36.032 15:29:57 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:36.032 15:29:57 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n3 00:04:36.032 15:29:57 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:04:36.032 15:29:57 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:36.032 No valid GPT data, bailing 00:04:36.032 15:29:57 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:36.032 15:29:57 -- scripts/common.sh@393 -- # pt= 00:04:36.032 15:29:57 -- scripts/common.sh@394 -- # return 1 00:04:36.032 15:29:57 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:36.291 1+0 records in 00:04:36.291 1+0 records out 
00:04:36.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00451774 s, 232 MB/s 00:04:36.291 15:29:57 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:36.291 15:29:57 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:36.291 15:29:57 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme3n1 00:04:36.291 15:29:57 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:04:36.291 15:29:57 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:36.291 No valid GPT data, bailing 00:04:36.291 15:29:57 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:36.291 15:29:57 -- scripts/common.sh@393 -- # pt= 00:04:36.291 15:29:57 -- scripts/common.sh@394 -- # return 1 00:04:36.291 15:29:57 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:36.291 1+0 records in 00:04:36.291 1+0 records out 00:04:36.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00555531 s, 189 MB/s 00:04:36.291 15:29:57 -- spdk/autotest.sh@129 -- # sync 00:04:36.291 15:29:57 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:36.291 15:29:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:36.291 15:29:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:38.199 15:29:59 -- spdk/autotest.sh@135 -- # uname -s 00:04:38.199 15:29:59 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:38.199 15:29:59 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:38.199 15:29:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:38.199 15:29:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:38.199 15:29:59 -- common/autotest_common.sh@10 -- # set +x 00:04:38.199 ************************************ 00:04:38.199 START TEST setup.sh 00:04:38.199 ************************************ 00:04:38.199 15:29:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:38.199 * Looking for test storage... 00:04:38.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:38.199 15:29:59 -- setup/test-setup.sh@10 -- # uname -s 00:04:38.199 15:29:59 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:38.199 15:29:59 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:38.199 15:29:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:38.199 15:29:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:38.199 15:29:59 -- common/autotest_common.sh@10 -- # set +x 00:04:38.199 ************************************ 00:04:38.199 START TEST acl 00:04:38.200 ************************************ 00:04:38.200 15:29:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:38.200 * Looking for test storage... 
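The dd transfers recorded a few lines up come from the pre-cleanup loop traced at autotest.sh@121-125: every /dev/nvme*n* namespace that is not in use gets its first MiB zeroed so stale metadata cannot leak between runs. A condensed sketch of that loop; treating a failed spdk-gpt.py probe plus an empty blkid PTTYPE as "free" matches the 'No valid GPT data, bailing' lines above, though the exact exit-code semantics of spdk-gpt.py are an assumption:

for dev in $(ls /dev/nvme*n* | grep -v p || true); do   # grep -v p drops partitions
  # A device counts as in use when spdk-gpt.py finds a valid GPT or blkid
  # reports a partition-table type; only free devices are wiped.
  if "$rootdir/scripts/spdk-gpt.py" "$dev" || [[ -n $(blkid -s PTTYPE -o value "$dev") ]]; then
    continue
  fi
  dd if=/dev/zero of="$dev" bs=1M count=1   # the '1.0 MiB copied' records above
done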
00:04:38.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:38.200 15:29:59 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:38.200 15:29:59 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:38.200 15:29:59 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:38.200 15:29:59 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:38.200 15:29:59 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:38.200 15:29:59 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:38.200 15:29:59 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:38.200 15:29:59 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:38.200 15:29:59 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:38.200 15:29:59 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:38.200 15:29:59 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:38.200 15:29:59 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:38.200 15:29:59 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:38.200 15:29:59 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:38.200 15:29:59 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:38.200 15:29:59 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:04:38.200 15:29:59 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:04:38.200 15:29:59 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:38.200 15:29:59 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:38.200 15:29:59 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:38.200 15:29:59 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n2 00:04:38.200 15:29:59 -- common/autotest_common.sh@1647 -- # local device=nvme2n2 00:04:38.200 15:29:59 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:38.200 15:29:59 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:38.200 15:29:59 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:38.200 15:29:59 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n3 00:04:38.200 15:29:59 -- common/autotest_common.sh@1647 -- # local device=nvme2n3 00:04:38.200 15:29:59 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:38.200 15:29:59 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:38.200 15:29:59 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:38.200 15:29:59 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3c3n1 00:04:38.200 15:29:59 -- common/autotest_common.sh@1647 -- # local device=nvme3c3n1 00:04:38.200 15:29:59 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:38.200 15:29:59 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:38.200 15:29:59 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:38.200 15:29:59 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:04:38.200 15:29:59 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:04:38.200 15:29:59 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:38.200 15:29:59 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:38.200 15:29:59 -- setup/acl.sh@12 -- # devs=() 00:04:38.200 15:29:59 -- setup/acl.sh@12 -- # declare -a devs 
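Before touching any controller, acl.sh repeats the get_zoned_devs walk from autotest_common.sh that the trace above spells out per namespace. Reduced to its core, the check reads each block device's queue/zoned sysfs attribute and treats anything other than "none" as zoned. The sketch below condenses the argument plumbing; the trace also declares a bdf variable, which a literal 1 stands in for here:

is_block_zoned() {
  local device=$1
  [[ -e /sys/block/$device/queue/zoned ]] || return 1   # no attribute: not zoned
  [[ $(< "/sys/block/$device/queue/zoned") != none ]]   # the [[ none != none ]] tests above
}

declare -gA zoned_devs
for nvme in /sys/block/nvme*; do
  dev=${nvme##*/}
  is_block_zoned "$dev" && zoned_devs["$dev"]=1   # value stands in for the device's bdf
done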
00:04:38.200 15:29:59 -- setup/acl.sh@13 -- # drivers=() 00:04:38.200 15:29:59 -- setup/acl.sh@13 -- # declare -A drivers 00:04:38.200 15:29:59 -- setup/acl.sh@51 -- # setup reset 00:04:38.200 15:29:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.200 15:29:59 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:39.575 15:30:00 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:39.575 15:30:00 -- setup/acl.sh@16 -- # local dev driver 00:04:39.575 15:30:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.575 15:30:00 -- setup/acl.sh@15 -- # setup output status 00:04:39.575 15:30:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.575 15:30:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:39.575 Hugepages 00:04:39.575 node hugesize free / total 00:04:39.575 15:30:01 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:39.575 15:30:01 -- setup/acl.sh@19 -- # continue 00:04:39.575 15:30:01 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.575 00:04:39.575 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:39.575 15:30:01 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:39.575 15:30:01 -- setup/acl.sh@19 -- # continue 00:04:39.575 15:30:01 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.575 15:30:01 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:39.575 15:30:01 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:39.575 15:30:01 -- setup/acl.sh@20 -- # continue 00:04:39.575 15:30:01 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.832 15:30:01 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:39.832 15:30:01 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:39.832 15:30:01 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:39.832 15:30:01 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:39.832 15:30:01 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:39.832 15:30:01 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.832 15:30:01 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:39.832 15:30:01 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:39.832 15:30:01 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:39.832 15:30:01 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:39.832 15:30:01 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:39.832 15:30:01 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.832 15:30:01 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:04:39.832 15:30:01 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:39.832 15:30:01 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:39.832 15:30:01 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:39.832 15:30:01 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:39.833 15:30:01 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.833 15:30:01 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:04:39.833 15:30:01 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:39.833 15:30:01 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:39.833 15:30:01 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:39.833 15:30:01 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:39.833 15:30:01 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.833 15:30:01 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:39.833 15:30:01 -- setup/acl.sh@54 -- # run_test denied denied 00:04:39.833 15:30:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.833 
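A few lines up, collect_setup_devs turned the setup.sh status table (Type BDF Vendor Device NUMA Driver Device Block devices) into the devs/drivers arrays that the (( 4 > 0 )) assertion then checks. A sketch of that parse, with column positions taken from the read pattern in the trace and the PCI_BLOCKED filter reduced to a substring test:

declare -a devs
declare -A drivers
while read -r _ dev _ _ _ driver _; do
  [[ $dev == *:*:*.* ]] || continue            # skip the hugepage and header rows
  [[ $driver == nvme ]] || continue            # virtio-pci at 00:03.0 falls out here
  [[ $PCI_BLOCKED == *"$dev"* ]] && continue   # same blocklist the denied test drives
  devs+=("$dev")
  drivers["$dev"]=nvme
done < <("$rootdir/scripts/setup.sh" status)
(( ${#devs[@]} > 0 ))                          # the (( 4 > 0 )) check at acl.sh@24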
15:30:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.833 15:30:01 -- common/autotest_common.sh@10 -- # set +x 00:04:39.833 ************************************ 00:04:39.833 START TEST denied 00:04:39.833 ************************************ 00:04:39.833 15:30:01 -- common/autotest_common.sh@1104 -- # denied 00:04:39.833 15:30:01 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:39.833 15:30:01 -- setup/acl.sh@38 -- # setup output config 00:04:39.833 15:30:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.833 15:30:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:39.833 15:30:01 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:41.205 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:41.205 15:30:02 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:41.205 15:30:02 -- setup/acl.sh@28 -- # local dev driver 00:04:41.205 15:30:02 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:41.205 15:30:02 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:41.205 15:30:02 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:41.205 15:30:02 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:41.205 15:30:02 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:41.205 15:30:02 -- setup/acl.sh@41 -- # setup reset 00:04:41.205 15:30:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.206 15:30:02 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:47.769 00:04:47.769 real 0m7.214s 00:04:47.769 user 0m0.860s 00:04:47.769 sys 0m1.408s 00:04:47.769 15:30:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.769 15:30:08 -- common/autotest_common.sh@10 -- # set +x 00:04:47.769 ************************************ 00:04:47.769 END TEST denied 00:04:47.769 ************************************ 00:04:47.769 15:30:08 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:47.769 15:30:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:47.769 15:30:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:47.769 15:30:08 -- common/autotest_common.sh@10 -- # set +x 00:04:47.769 ************************************ 00:04:47.769 START TEST allowed 00:04:47.769 ************************************ 00:04:47.769 15:30:08 -- common/autotest_common.sh@1104 -- # allowed 00:04:47.769 15:30:08 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:47.769 15:30:08 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:47.769 15:30:08 -- setup/acl.sh@45 -- # setup output config 00:04:47.769 15:30:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.769 15:30:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:48.336 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:48.336 15:30:09 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:48.336 15:30:09 -- setup/acl.sh@28 -- # local dev driver 00:04:48.336 15:30:09 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:48.336 15:30:09 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:48.336 15:30:09 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:48.336 15:30:09 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:48.336 15:30:09 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:48.336 15:30:09 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:48.336 15:30:09 -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:08.0 ]] 00:04:48.336 15:30:09 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:04:48.336 15:30:09 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:48.336 15:30:09 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:48.336 15:30:09 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:48.336 15:30:09 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:04:48.336 15:30:09 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:04:48.336 15:30:09 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:48.336 15:30:09 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:48.336 15:30:09 -- setup/acl.sh@48 -- # setup reset 00:04:48.336 15:30:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:48.336 15:30:09 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.715 00:04:49.715 real 0m2.305s 00:04:49.715 user 0m1.038s 00:04:49.715 sys 0m1.243s 00:04:49.715 15:30:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.715 15:30:10 -- common/autotest_common.sh@10 -- # set +x 00:04:49.715 ************************************ 00:04:49.715 END TEST allowed 00:04:49.715 ************************************ 00:04:49.715 ************************************ 00:04:49.715 END TEST acl 00:04:49.715 ************************************ 00:04:49.715 00:04:49.715 real 0m11.388s 00:04:49.715 user 0m2.766s 00:04:49.715 sys 0m3.686s 00:04:49.715 15:30:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.715 15:30:11 -- common/autotest_common.sh@10 -- # set +x 00:04:49.715 15:30:11 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:49.715 15:30:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.715 15:30:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.715 15:30:11 -- common/autotest_common.sh@10 -- # set +x 00:04:49.715 ************************************ 00:04:49.715 START TEST hugepages 00:04:49.715 ************************************ 00:04:49.715 15:30:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:49.715 * Looking for test storage... 
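Note: the denied/allowed tests above steer scripts/setup.sh with the PCI_BLOCKED and PCI_ALLOWED environment variables: "denied" greps the config output for the "Skipping denied controller at 0000:00:06.0" line, and both tests finish by verifying which controllers are still claimed by the kernel nvme driver. The verify step in the trace resolves each BDF's driver symlink under /sys/bus/pci/devices; a minimal sketch of that check, reconstructed from the xtrace above (simplified, not the script itself):

    verify() {
        local dev driver
        for dev in "$@"; do
            # each requested BDF must exist and its bound driver must be nvme
            [[ -e /sys/bus/pci/devices/$dev ]] || return 1
            driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
            [[ ${driver##*/} == nvme ]] || return 1
        done
    }

So after PCI_BLOCKED=' 0000:00:06.0' the blocked controller is left on nvme (it was skipped, not rebound) and verify 0000:00:06.0 passes, while the allowed test runs verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 once only 0000:00:06.0 has been handed to uio_pci_generic.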
00:04:49.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:49.715 15:30:11 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:49.715 15:30:11 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:49.715 15:30:11 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:49.715 15:30:11 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:49.715 15:30:11 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:49.715 15:30:11 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:49.715 15:30:11 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:49.715 15:30:11 -- setup/common.sh@18 -- # local node= 00:04:49.715 15:30:11 -- setup/common.sh@19 -- # local var val 00:04:49.715 15:30:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.715 15:30:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.715 15:30:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.715 15:30:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.715 15:30:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.715 15:30:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.715 15:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.715 15:30:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5868224 kB' 'MemAvailable: 7408460 kB' 'Buffers: 2436 kB' 'Cached: 1754196 kB' 'SwapCached: 0 kB' 'Active: 442900 kB' 'Inactive: 1414120 kB' 'Active(anon): 110900 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 102012 kB' 'Mapped: 48632 kB' 'Shmem: 10512 kB' 'KReclaimable: 62560 kB' 'Slab: 134572 kB' 'SReclaimable: 62560 kB' 'SUnreclaim: 72012 kB' 'KernelStack: 6288 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 325976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: setup/common.sh@31-32 then "read -r var val _" each meminfo field in turn and "continue" past every non-matching key, MemTotal through HugePages_Surp, until Hugepagesize matches]
00:04:49.717 15:30:11 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.717 15:30:11 -- setup/common.sh@33 -- # echo 2048 00:04:49.717 15:30:11 -- setup/common.sh@33 -- # return 0 00:04:49.717 15:30:11 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:49.717 15:30:11 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:49.717 15:30:11 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:49.717 15:30:11 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:49.717 15:30:11 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:49.717 15:30:11 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
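Note: get_meminfo, called above for Hugepagesize and again below for the AnonHugePages, HugePages_Surp and HugePages_Rsvd counters, snapshots a meminfo file, splits each line on ': ', and echoes the value of the first matching key. A simplified runnable equivalent (the real setup/common.sh helper additionally slurps the file with mapfile and strips any leading "Node N" prefix from per-node files, which this sketch glosses over):

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # per-NUMA-node counters, when a node argument is supplied
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"   # numeric part only; the unit lands in $_
                return 0
            fi
        done < "$mem_f"
        return 1
    }

Here it returns 2048 (kB) for Hugepagesize; get_test_nr_hugepages below then derives nr_hugepages=1024 from the requested size, 2097152 kB / 2048 kB per page.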
00:04:49.717 15:30:11 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:49.717 15:30:11 -- setup/hugepages.sh@207 -- # get_nodes 00:04:49.717 15:30:11 -- setup/hugepages.sh@27 -- # local node 00:04:49.717 15:30:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.717 15:30:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:49.717 15:30:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:49.717 15:30:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.717 15:30:11 -- setup/hugepages.sh@208 -- # clear_hp 00:04:49.717 15:30:11 -- setup/hugepages.sh@37 -- # local node hp 00:04:49.717 15:30:11 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:49.717 15:30:11 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.717 15:30:11 -- setup/hugepages.sh@41 -- # echo 0 00:04:49.717 15:30:11 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.717 15:30:11 -- setup/hugepages.sh@41 -- # echo 0 00:04:49.717 15:30:11 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:49.717 15:30:11 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:49.717 15:30:11 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:49.717 15:30:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.717 15:30:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.717 15:30:11 -- common/autotest_common.sh@10 -- # set +x 00:04:49.717 ************************************ 00:04:49.717 START TEST default_setup 00:04:49.717 ************************************ 00:04:49.717 15:30:11 -- common/autotest_common.sh@1104 -- # default_setup 00:04:49.717 15:30:11 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:49.717 15:30:11 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:49.717 15:30:11 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:49.717 15:30:11 -- setup/hugepages.sh@51 -- # shift 00:04:49.717 15:30:11 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:49.717 15:30:11 -- setup/hugepages.sh@52 -- # local node_ids 00:04:49.717 15:30:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:49.717 15:30:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:49.717 15:30:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:49.717 15:30:11 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:49.717 15:30:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.717 15:30:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:49.717 15:30:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:49.717 15:30:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.717 15:30:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:49.717 15:30:11 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:49.717 15:30:11 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:49.717 15:30:11 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:49.717 15:30:11 -- setup/hugepages.sh@73 -- # return 0 00:04:49.717 15:30:11 -- setup/hugepages.sh@137 -- # setup output 00:04:49.717 15:30:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.717 15:30:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:50.653 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:50.913 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.913 
0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.913 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.913 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.913 15:30:12 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:50.913 15:30:12 -- setup/hugepages.sh@89 -- # local node 00:04:50.913 15:30:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:50.913 15:30:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:50.913 15:30:12 -- setup/hugepages.sh@92 -- # local surp 00:04:50.913 15:30:12 -- setup/hugepages.sh@93 -- # local resv 00:04:50.913 15:30:12 -- setup/hugepages.sh@94 -- # local anon 00:04:50.913 15:30:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:50.913 15:30:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:50.913 15:30:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:50.913 15:30:12 -- setup/common.sh@18 -- # local node= 00:04:50.913 15:30:12 -- setup/common.sh@19 -- # local var val 00:04:50.913 15:30:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:50.913 15:30:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.913 15:30:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.913 15:30:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.913 15:30:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.913 15:30:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.913 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.914 15:30:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7948172 kB' 'MemAvailable: 9488192 kB' 'Buffers: 2436 kB' 'Cached: 1754188 kB' 'SwapCached: 0 kB' 'Active: 459540 kB' 'Inactive: 1414136 kB' 'Active(anon): 127540 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118708 kB' 'Mapped: 48760 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 133900 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71804 kB' 'KernelStack: 6304 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: the AnonHugePages lookup reads each meminfo field and "continue"s past every non-matching key until AnonHugePages matches]
00:04:50.915 15:30:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.915 15:30:12 -- setup/common.sh@33 -- # echo 0 00:04:50.915 15:30:12 -- setup/common.sh@33 -- # return 0 00:04:50.915 15:30:12 -- setup/hugepages.sh@97 -- # anon=0 00:04:50.915 15:30:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:50.915 15:30:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.915 15:30:12 -- setup/common.sh@18 -- # local node= 00:04:50.916 15:30:12 -- setup/common.sh@19 -- # local var val 00:04:50.916 15:30:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:50.916 15:30:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.916 15:30:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.916 15:30:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.916 15:30:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.178 15:30:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.178 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.178 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.178 15:30:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7948676 kB' 'MemAvailable: 9488696 kB' 'Buffers: 2436 kB' 'Cached: 1754188 kB' 'SwapCached: 0 kB' 'Active: 459380 kB' 'Inactive: 1414136 kB' 'Active(anon): 127380 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118500 kB' 'Mapped: 48760 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 133872 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71776 kB' 'KernelStack: 6240 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: the HugePages_Surp lookup likewise skips every non-matching key until HugePages_Surp matches]
00:04:51.180 15:30:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.180 15:30:12 -- setup/common.sh@33 -- # echo 0 00:04:51.180 15:30:12 -- setup/common.sh@33 -- # return 0 00:04:51.180 15:30:12 -- setup/hugepages.sh@99 -- # surp=0 00:04:51.180 15:30:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:51.180 15:30:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:51.180 15:30:12 -- setup/common.sh@18 -- # local node= 00:04:51.180 15:30:12 -- setup/common.sh@19 -- # local var val 00:04:51.180 15:30:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.180 15:30:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.180 15:30:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.180 15:30:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.180 15:30:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.180 15:30:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.180 15:30:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7948424
kB' 'MemAvailable: 9488460 kB' 'Buffers: 2436 kB' 'Cached: 1754188 kB' 'SwapCached: 0 kB' 'Active: 459376 kB' 'Inactive: 1414152 kB' 'Active(anon): 127376 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118512 kB' 'Mapped: 48632 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 133876 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71780 kB' 'KernelStack: 6256 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.180 15:30:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.180 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.180 15:30:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.180 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.180 15:30:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.180 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.180 15:30:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.180 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.180 15:30:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.180 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.180 15:30:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.180 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.180 15:30:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.180 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.180 15:30:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.180 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.180 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.180 15:30:12 -- setup/common.sh@32 
-- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.180 15:30:12 -- setup/common.sh@32 -- # continue
[trace trimmed: setup/common.sh@31-32 repeat the same IFS=': ' / read -r var val _ / continue cycle for the Inactive(anon) through CmaTotal fields of the dump above while get_meminfo scans for HugePages_Rsvd]
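A note for readers following the trace: each hop through setup/common.sh@31-32 above is one iteration of a scan over the meminfo dump just printed, and the backslash-laden right-hand sides such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are simply how bash xtrace renders the quoted pattern in a [[ $var == "$get" ]] test. Below is a minimal sketch of that scan, reconstructed from the xtrace alone rather than from SPDK's setup/common.sh, so the naming and the node handling are assumptions:

    #!/usr/bin/env bash
    shopt -s extglob                        # needed for the +([0-9]) pattern below
    get_meminfo() {                         # usage: get_meminfo <Field> [numa-node]
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # with a node id, read that node's sysfs meminfo instead of /proc
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # sysfs lines carry a "Node N " prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long continue runs in this log
            echo "$val"                        # bare number, e.g. 0 or 1024
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Run against the snapshot printed above, get_meminfo HugePages_Rsvd would emit 0 and get_meminfo HugePages_Total would emit 1024, which is exactly what the echo 0 and echo 1024 readbacks in this trace show.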
00:04:51.182 15:30:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.182 15:30:12 -- setup/common.sh@33 -- # echo 0 00:04:51.182 15:30:12 -- setup/common.sh@33 -- # return 0 00:04:51.182 15:30:12 -- setup/hugepages.sh@100 -- # resv=0 00:04:51.182 nr_hugepages=1024 00:04:51.182 15:30:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:51.182 resv_hugepages=0 00:04:51.182 15:30:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:51.182 surplus_hugepages=0 00:04:51.182 15:30:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:51.182 anon_hugepages=0 00:04:51.182 15:30:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:51.182 15:30:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.182 15:30:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:51.182 15:30:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:51.182 15:30:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:51.182 15:30:12 -- setup/common.sh@18 -- # local node= 00:04:51.182 15:30:12 -- setup/common.sh@19 -- # local var val 00:04:51.182 15:30:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.182 15:30:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.182 15:30:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.182 15:30:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.182 15:30:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.182 15:30:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 15:30:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7948424 kB' 'MemAvailable: 9488460 kB' 'Buffers: 2436 kB' 'Cached: 1754188 kB' 'SwapCached: 0 kB' 'Active: 459068 kB' 'Inactive: 1414152 kB' 'Active(anon): 127068 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118432 kB' 'Mapped: 48632 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 133872 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71776 kB' 'KernelStack: 6224 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 13461012 kB' 'Committed_AS: 344492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.182 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.182 15:30:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.182 
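At this point in the trace verify_nr_hugepages has scraped surp=0 and resv=0 and is re-reading HugePages_Total; the two arithmetic guards traced just above (hugepages.sh@107 and @109) reduce to plain addition over those values. A worked restatement with the numbers from this run:

    nr_hugepages=1024 surp=0 resv=0            # values echoed in the trace above
    (( 1024 == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0, so true
    (( 1024 == nr_hugepages ))                 # second guard, trivially true

A mismatch would leave the (( )) with a nonzero status and surface here as a failed test.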
[trace trimmed: the pending continue for Active(file) and the IFS=': ' / read / continue cycles for the Inactive(file) through Unaccepted fields are omitted while get_meminfo scans for HugePages_Total]
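Once the HugePages_Total readback below confirms 1024, hugepages.sh walks the NUMA topology: get_nodes globs /sys/devices/system/node/node*, records each node's page count in nodes_sys, and then per node folds reserved and surplus pages into nodes_test before comparing. A loose sketch of that bookkeeping, inferred from the hugepages.sh line numbers in this trace (the seeding of nodes_test happens in get_test_nr_hugepages_per_node outside this excerpt, and get_meminfo is the sketch given earlier; both are assumptions, not SPDK's verbatim code):

    shopt -s extglob
    nr_hugepages=1024 resv=0                  # values from this single-node run
    nodes_test=([0]=$nr_hugepages)            # seeded before verification starts
    declare -a nodes_sys
    no_nodes=0
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$nr_hugepages   # trace shows this expand to 1024
        (( ++no_nodes ))
    done
    (( no_nodes > 0 ))                        # sanity: at least one node found
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))        # fold reserved pages back in
        surp=$(get_meminfo HugePages_Surp "$node")  # per-node sysfs readback
        (( nodes_test[node] += surp ))
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
        [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]]   # @130: [[ 1024 == 1024 ]]
    done

With surp and resv both 0 on node0, this prints the node0=1024 expecting 1024 line seen further down and the final comparison succeeds, closing out the default_setup test.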
15:30:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.183 15:30:12 -- setup/common.sh@33 -- # echo 1024 00:04:51.183 15:30:12 -- setup/common.sh@33 -- # return 0 00:04:51.183 15:30:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.183 15:30:12 -- setup/hugepages.sh@112 -- # get_nodes 00:04:51.183 15:30:12 -- setup/hugepages.sh@27 -- # local node 00:04:51.183 15:30:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.183 15:30:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:51.183 15:30:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:51.183 15:30:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.183 15:30:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.184 15:30:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.184 15:30:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:51.184 15:30:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.184 15:30:12 -- setup/common.sh@18 -- # local node=0 00:04:51.184 15:30:12 -- setup/common.sh@19 -- # local var val 00:04:51.184 15:30:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.184 15:30:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.184 15:30:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:51.184 15:30:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:51.184 15:30:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.184 15:30:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.184 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.184 15:30:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7948424 kB' 'MemUsed: 4293548 kB' 'SwapCached: 0 kB' 'Active: 459120 kB' 'Inactive: 1414152 kB' 'Active(anon): 127120 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1756624 kB' 'Mapped: 48632 kB' 'AnonPages: 118492 kB' 'Shmem: 10472 kB' 'KernelStack: 6240 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62096 kB' 'Slab: 133868 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:51.184 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.184 15:30:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.184 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.184 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.184 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.184 15:30:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.184 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.184 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.184 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.184 15:30:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.184 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.184 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.184 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.184 15:30:12 -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.184 15:30:12 -- setup/common.sh@32 -- # continue
[trace trimmed: the same scan of node0's sysfs meminfo continues past the Active through SUnreclaim fields while looking for HugePages_Surp]
00:04:51.185 15:30:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 15:30:12 -- setup/common.sh@32
-- # continue 00:04:51.185 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.185 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.185 15:30:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.185 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.185 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.185 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.185 15:30:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.185 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.185 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.185 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.185 15:30:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.185 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.185 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.185 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.185 15:30:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.185 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.185 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.185 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.185 15:30:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.185 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.185 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.185 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.185 15:30:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.185 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.185 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.185 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.185 15:30:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.185 15:30:12 -- setup/common.sh@32 -- # continue 00:04:51.185 15:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.185 15:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.185 15:30:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.185 15:30:12 -- setup/common.sh@33 -- # echo 0 00:04:51.185 15:30:12 -- setup/common.sh@33 -- # return 0 00:04:51.185 15:30:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.185 15:30:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.185 15:30:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.185 15:30:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.185 node0=1024 expecting 1024 00:04:51.185 15:30:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:51.185 15:30:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:51.185 00:04:51.185 real 0m1.453s 00:04:51.185 user 0m0.658s 00:04:51.185 sys 0m0.762s 00:04:51.185 15:30:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.185 15:30:12 -- common/autotest_common.sh@10 -- # set +x 00:04:51.185 ************************************ 00:04:51.185 END TEST default_setup 00:04:51.185 ************************************ 00:04:51.185 15:30:12 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:51.185 15:30:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:51.185 15:30:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.185 15:30:12 -- common/autotest_common.sh@10 -- 
# set +x 00:04:51.185 ************************************ 00:04:51.185 START TEST per_node_1G_alloc 00:04:51.185 ************************************ 00:04:51.185 15:30:12 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:51.185 15:30:12 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:51.185 15:30:12 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:51.185 15:30:12 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:51.185 15:30:12 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:51.185 15:30:12 -- setup/hugepages.sh@51 -- # shift 00:04:51.185 15:30:12 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:51.185 15:30:12 -- setup/hugepages.sh@52 -- # local node_ids 00:04:51.185 15:30:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:51.185 15:30:12 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:51.185 15:30:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:51.185 15:30:12 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:51.185 15:30:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:51.185 15:30:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:51.185 15:30:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:51.185 15:30:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:51.185 15:30:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:51.185 15:30:12 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:51.185 15:30:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:51.185 15:30:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:51.185 15:30:12 -- setup/hugepages.sh@73 -- # return 0 00:04:51.185 15:30:12 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:51.185 15:30:12 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:51.185 15:30:12 -- setup/hugepages.sh@146 -- # setup output 00:04:51.185 15:30:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.185 15:30:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:51.754 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:51.754 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:51.754 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:51.754 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:51.754 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:51.755 15:30:13 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:51.755 15:30:13 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:51.755 15:30:13 -- setup/hugepages.sh@89 -- # local node 00:04:51.755 15:30:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:51.755 15:30:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:51.755 15:30:13 -- setup/hugepages.sh@92 -- # local surp 00:04:51.755 15:30:13 -- setup/hugepages.sh@93 -- # local resv 00:04:51.755 15:30:13 -- setup/hugepages.sh@94 -- # local anon 00:04:51.755 15:30:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:51.755 15:30:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:51.755 15:30:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:51.755 15:30:13 -- setup/common.sh@18 -- # local node= 00:04:51.755 15:30:13 -- setup/common.sh@19 -- # local var val 00:04:51.755 15:30:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.755 15:30:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.755 15:30:13 -- setup/common.sh@23 -- # [[ 
-e /sys/devices/system/node/node/meminfo ]] 00:04:51.755 15:30:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.755 15:30:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.755 15:30:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.755 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.755 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.755 15:30:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8997600 kB' 'MemAvailable: 10537636 kB' 'Buffers: 2436 kB' 'Cached: 1754188 kB' 'SwapCached: 0 kB' 'Active: 460028 kB' 'Inactive: 1414152 kB' 'Active(anon): 128028 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118948 kB' 'Mapped: 48796 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 133864 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71768 kB' 'KernelStack: 6316 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 344492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:51.755 15:30:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.755 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.755 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.755 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.755 15:30:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.755 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.755 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.755 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.755 15:30:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.755 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.755 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.755 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.755 15:30:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.755 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.755 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.755 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.755 15:30:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.755 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.755 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.755 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.755 15:30:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.755 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.755 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.755 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.755 15:30:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.755 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.755 15:30:13 -- setup/common.sh@31 -- # 
IFS=': '
[trace trimmed: the AnonHugePages scan reads and continues past the Inactive through Bounce fields of the dump above]
00:04:51.756 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.756 15:30:13 -- setup/common.sh@32 -- # [[
WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.756 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.756 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.756 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.756 15:30:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.756 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.756 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.756 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.756 15:30:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.756 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.756 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.756 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.756 15:30:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.756 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.756 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.756 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.756 15:30:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.756 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.756 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.756 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.756 15:30:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.756 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.756 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.756 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.756 15:30:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.756 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.756 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.756 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.756 15:30:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.756 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.756 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.756 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.756 15:30:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.756 15:30:13 -- setup/common.sh@33 -- # echo 0 00:04:51.756 15:30:13 -- setup/common.sh@33 -- # return 0 00:04:51.756 15:30:13 -- setup/hugepages.sh@97 -- # anon=0 00:04:51.757 15:30:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:51.757 15:30:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.757 15:30:13 -- setup/common.sh@18 -- # local node= 00:04:51.757 15:30:13 -- setup/common.sh@19 -- # local var val 00:04:51.757 15:30:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.757 15:30:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.757 15:30:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.757 15:30:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.757 15:30:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.757 15:30:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.757 15:30:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8997600 kB' 'MemAvailable: 10537636 kB' 'Buffers: 2436 kB' 'Cached: 1754188 kB' 'SwapCached: 0 kB' 'Active: 459452 kB' 'Inactive: 1414152 kB' 
'Active(anon): 127452 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118636 kB' 'Mapped: 48688 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 133872 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71776 kB' 'KernelStack: 6224 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 344492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.757 15:30:13 -- 
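Everything from setup/common.sh@31 through @33 in the scan above is the get_meminfo helper walking the snapshot one 'key: value' pair at a time until it reaches the requested field; the backslash-riddled \H\u\g\e\P\a\g\e\s\_\S\u\r\p strings are simply how bash xtrace prints the quoted, literal right-hand side of the [[ $var == "$get" ]] test. A minimal sketch of the helper as reconstructed from this trace (the names get, node, mem_f and mem all appear in it verbatim; the body is an approximation, not the actual SPDK source):

  shopt -s extglob                     # the +([0-9]) patterns below need extended globbing
  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # with a node argument, read that node's own counters instead
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }") # per-node files prefix every line with "Node N "
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue  # not the requested field, keep scanning
          echo "$val"                       # IFS=': ' already split the kB unit into $_
          return 0
      done < <(printf '%s\n' "${mem[@]}")
  }
  # usage: get_meminfo HugePages_Surp       system-wide value
  #        get_meminfo HugePages_Surp 0     value for NUMA node 0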
setup/common.sh@31 -- # read -r var val _ 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.757 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.757 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # 
continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.758 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.758 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # continue 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.759 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.759 15:30:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.759 15:30:13 -- setup/common.sh@33 -- # echo 0 00:04:51.759 15:30:13 -- setup/common.sh@33 -- # return 0 00:04:51.759 15:30:13 -- setup/hugepages.sh@99 -- # surp=0 00:04:51.759 15:30:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:51.759 15:30:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:51.759 15:30:13 -- setup/common.sh@18 -- # local node= 00:04:51.759 15:30:13 -- setup/common.sh@19 -- # local var val 00:04:51.759 15:30:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.759 15:30:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.759 15:30:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.759 15:30:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.759 15:30:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.020 15:30:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.020 15:30:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8998832 kB' 'MemAvailable: 10538868 kB' 'Buffers: 2436 kB' 'Cached: 1754188 kB' 'SwapCached: 0 kB' 'Active: 459344 kB' 'Inactive: 1414152 kB' 'Active(anon): 127344 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118564 kB' 'Mapped: 48632 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 133884 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71788 kB' 'KernelStack: 6256 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 344492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.020 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.020 15:30:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.021 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.021 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.022 15:30:13 -- setup/common.sh@33 -- # echo 0 00:04:52.022 15:30:13 -- setup/common.sh@33 -- # return 0 00:04:52.022 15:30:13 -- setup/hugepages.sh@100 -- # resv=0 00:04:52.022 nr_hugepages=512 00:04:52.022 15:30:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:52.022 resv_hugepages=0 00:04:52.022 15:30:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.022 surplus_hugepages=0 00:04:52.022 15:30:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.022 anon_hugepages=0 00:04:52.022 15:30:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.022 15:30:13 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:52.022 15:30:13 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:52.022 15:30:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.022 15:30:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:52.022 15:30:13 -- setup/common.sh@18 -- # local node= 00:04:52.022 15:30:13 -- setup/common.sh@19 -- # local var val 00:04:52.022 15:30:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.022 15:30:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.022 15:30:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.022 15:30:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.022 15:30:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.022 15:30:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8998832 kB' 'MemAvailable: 10538868 kB' 'Buffers: 2436 kB' 'Cached: 1754188 kB' 'SwapCached: 0 kB' 'Active: 459024 kB' 'Inactive: 1414152 kB' 'Active(anon): 127024 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118512 kB' 'Mapped: 48632 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 133880 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71784 kB' 'KernelStack: 6240 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 344492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 
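The arithmetic tests just above are the accounting invariant behind per_node_1G_alloc: the 512 pages the test configured have to be fully visible to the kernel with no surplus or reserved remainder, i.e. HugePages_Total == nr_hugepages + HugePages_Surp + HugePages_Rsvd, which with the values echoed here is 512 == 512 + 0 + 0. In sketch form, reusing the get_meminfo sketch above (the failure message is illustrative):

  nr_hugepages=512 surp=0 resv=0            # values echoed in the trace
  total=$(get_meminfo HugePages_Total)      # 512 on this run
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting is off' >&2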
15:30:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 
15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.022 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.022 15:30:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.023 15:30:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.023 15:30:13 -- setup/common.sh@33 -- # echo 512 00:04:52.023 15:30:13 -- setup/common.sh@33 -- # return 0 00:04:52.023 15:30:13 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:52.023 15:30:13 -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.023 15:30:13 -- setup/hugepages.sh@27 -- # local node 00:04:52.023 15:30:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.023 15:30:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:52.023 15:30:13 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:52.023 15:30:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.023 15:30:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.023 15:30:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.023 15:30:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.023 15:30:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.023 15:30:13 -- setup/common.sh@18 -- # local node=0 00:04:52.023 15:30:13 -- setup/common.sh@19 -- # local 
var val 00:04:52.023 15:30:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.023 15:30:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.023 15:30:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.023 15:30:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.023 15:30:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.023 15:30:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.023 15:30:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9003252 kB' 'MemUsed: 3238720 kB' 'SwapCached: 0 kB' 'Active: 459156 kB' 'Inactive: 1414152 kB' 'Active(anon): 127156 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1756624 kB' 'Mapped: 48632 kB' 'AnonPages: 118616 kB' 'Shmem: 10472 kB' 'KernelStack: 6240 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62096 kB' 'Slab: 133880 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:52.023 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- 
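The node=0 query above is the same helper pointed at /sys/devices/system/node/node0/meminfo, whose per-node fields (FilePages, MemUsed) differ slightly from the global /proc/meminfo; get_nodes, traced a few entries earlier, discovered that topology by globbing the node directories. A sketch of the discovery step, with the +([0-9]) extglob and the ${node##*node} suffix-strip taken verbatim from the trace (512 is this test's expected per-node count, and deriving no_nodes from the array length is this sketch's assumption):

  shopt -s extglob nullglob
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      # "/sys/devices/system/node/node0" -> "0": strip through the last "node"
      nodes_sys[${node##*node}]=512
  done
  no_nodes=${#nodes_sys[@]}                 # 1 on this single-node VM
  (( no_nodes > 0 )) || echo 'no NUMA nodes found' >&2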
setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.024 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.024 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.025 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 15:30:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.025 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.025 15:30:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:52.025 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 15:30:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.025 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.025 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 15:30:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.025 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.025 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 15:30:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.025 15:30:13 -- setup/common.sh@32 -- # continue 00:04:52.025 15:30:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.025 15:30:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.025 15:30:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.025 15:30:13 -- setup/common.sh@33 -- # echo 0 00:04:52.025 15:30:13 -- setup/common.sh@33 -- # return 0 00:04:52.025 15:30:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.025 15:30:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.025 15:30:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.025 15:30:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.025 node0=512 expecting 512 00:04:52.025 15:30:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:52.025 15:30:13 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:52.025 00:04:52.025 real 0m0.744s 00:04:52.025 user 0m0.347s 00:04:52.025 sys 0m0.439s 00:04:52.025 15:30:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.025 15:30:13 -- common/autotest_common.sh@10 -- # set +x 00:04:52.025 ************************************ 00:04:52.025 END TEST per_node_1G_alloc 00:04:52.025 ************************************ 00:04:52.025 15:30:13 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:52.025 15:30:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:52.025 15:30:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:52.025 15:30:13 -- common/autotest_common.sh@10 -- # set +x 00:04:52.025 ************************************ 00:04:52.025 START TEST even_2G_alloc 00:04:52.025 ************************************ 00:04:52.025 15:30:13 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:52.025 15:30:13 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:52.025 15:30:13 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:52.025 15:30:13 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:52.025 15:30:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.025 15:30:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:52.025 15:30:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:52.025 15:30:13 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:52.025 15:30:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.025 15:30:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:52.025 15:30:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:52.025 15:30:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.025 15:30:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.025 15:30:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:52.025 15:30:13 -- 
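
The even_2G_alloc test that starts here first converts a requested allocation size into a hugepage count and then splits it across nodes. From the trace, get_test_nr_hugepages 2097152 with the default 2048 kB hugepage size (Hugepagesize in the meminfo dumps) yields nr_hugepages=1024, and with a single node the whole count lands on node 0 (hugepages.sh@82). A hedged sketch of that arithmetic; the division by the hugepage size is an assumption, since the trace only shows the input size and the resulting 1024:

  size_kb=2097152                            # argument to get_test_nr_hugepages (2 GiB)
  hugepage_kb=2048                           # Hugepagesize reported in meminfo
  nr_hugepages=$(( size_kb / hugepage_kb ))  # -> 1024
  no_nodes=1
  declare -a nodes_test
  # With one node, node 0 receives the full count.
  nodes_test[no_nodes - 1]=$nr_hugepages
  echo "nr_hugepages=$nr_hugepages"          # nr_hugepages=1024
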
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:52.025 15:30:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.025 15:30:13 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:52.025 15:30:13 -- setup/hugepages.sh@83 -- # : 0 00:04:52.025 15:30:13 -- setup/hugepages.sh@84 -- # : 0 00:04:52.025 15:30:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.025 15:30:13 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:52.025 15:30:13 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:52.025 15:30:13 -- setup/hugepages.sh@153 -- # setup output 00:04:52.025 15:30:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.025 15:30:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:52.598 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:52.598 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:52.598 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:52.598 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:52.598 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:52.598 15:30:14 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:52.598 15:30:14 -- setup/hugepages.sh@89 -- # local node 00:04:52.598 15:30:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.598 15:30:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.598 15:30:14 -- setup/hugepages.sh@92 -- # local surp 00:04:52.598 15:30:14 -- setup/hugepages.sh@93 -- # local resv 00:04:52.598 15:30:14 -- setup/hugepages.sh@94 -- # local anon 00:04:52.598 15:30:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.598 15:30:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.598 15:30:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.598 15:30:14 -- setup/common.sh@18 -- # local node= 00:04:52.598 15:30:14 -- setup/common.sh@19 -- # local var val 00:04:52.598 15:30:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.598 15:30:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.598 15:30:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.598 15:30:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.598 15:30:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.598 15:30:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.598 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.598 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.598 15:30:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7986800 kB' 'MemAvailable: 9526836 kB' 'Buffers: 2436 kB' 'Cached: 1754188 kB' 'SwapCached: 0 kB' 'Active: 459976 kB' 'Inactive: 1414152 kB' 'Active(anon): 127976 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118912 kB' 'Mapped: 48964 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 134020 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71924 kB' 'KernelStack: 6392 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:52.598 15:30:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.598 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.598 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.598 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.598 15:30:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.598 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.598 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.598 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.598 15:30:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.598 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.598 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.598 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.598 15:30:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.598 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.598 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.598 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.598 15:30:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.598 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.598 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.598 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.598 15:30:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.598 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.598 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- 
setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.599 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.599 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.600 15:30:14 -- setup/common.sh@33 -- # echo 0 00:04:52.600 15:30:14 -- setup/common.sh@33 -- # return 0 00:04:52.600 15:30:14 -- setup/hugepages.sh@97 -- # anon=0 00:04:52.600 15:30:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.600 15:30:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.600 15:30:14 -- setup/common.sh@18 -- # local node= 00:04:52.600 15:30:14 -- setup/common.sh@19 -- # local var val 00:04:52.600 15:30:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.600 15:30:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.600 15:30:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.600 15:30:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.600 15:30:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.600 15:30:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7986800 kB' 'MemAvailable: 9526836 kB' 'Buffers: 2436 kB' 'Cached: 1754188 kB' 'SwapCached: 0 kB' 'Active: 459580 kB' 'Inactive: 1414152 kB' 'Active(anon): 127580 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118680 kB' 'Mapped: 48904 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 134036 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71940 kB' 'KernelStack: 6316 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r 
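
Each of these long runs is one get_meminfo call: the helper dumps the chosen meminfo file with printf, then walks it field by field, splitting on IFS=': ' until the requested key matches, echoing the value, and returning. The backslash-laden comparisons such as [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] are an xtrace artifact: bash escapes every character of the right-hand side of == inside [[ ]] because it is a pattern. A minimal reconstruction of the loop, with names taken from the trace (the real helper in setup/common.sh also performs the per-node file selection shown earlier):

  get_meminfo() {
    local get=$1 var val _ line
    local -a mem
    mapfile -t mem </proc/meminfo
    for line in "${mem[@]}"; do
      # "MemTotal: 12241972 kB" -> var=MemTotal val=12241972 _=kB
      IFS=': ' read -r var val _ <<<"$line"
      # The trace renders this comparison with the RHS pattern-escaped.
      if [[ $var == "$get" ]]; then
        echo "$val"
        return 0
      fi
    done
    return 1
  }
  # Usage, mirroring the calls in the log:
  surp=$(get_meminfo HugePages_Surp)   # -> 0
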
var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 
15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.600 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.600 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.601 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.601 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.602 15:30:14 -- 
setup/common.sh@33 -- # echo 0 00:04:52.602 15:30:14 -- setup/common.sh@33 -- # return 0 00:04:52.602 15:30:14 -- setup/hugepages.sh@99 -- # surp=0 00:04:52.602 15:30:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.602 15:30:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:52.602 15:30:14 -- setup/common.sh@18 -- # local node= 00:04:52.602 15:30:14 -- setup/common.sh@19 -- # local var val 00:04:52.602 15:30:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.602 15:30:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.602 15:30:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.602 15:30:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.602 15:30:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.602 15:30:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7986296 kB' 'MemAvailable: 9526332 kB' 'Buffers: 2436 kB' 'Cached: 1754188 kB' 'SwapCached: 0 kB' 'Active: 459492 kB' 'Inactive: 1414152 kB' 'Active(anon): 127492 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118624 kB' 'Mapped: 48776 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 134020 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71924 kB' 'KernelStack: 6300 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- 
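
Note the path in these system-wide scans: with "local node=" left empty, the sysfs check literally tests /sys/devices/system/node/node/meminfo, which never exists, so the helper keeps the /proc/meminfo default. A two-branch illustration of the fallback the trace takes:

  node=                                   # empty for system-wide queries
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  else
    mem_f=/proc/meminfo                   # the branch taken in this trace
  fi
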
setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.602 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.602 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 
00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # continue 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.603 15:30:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.603 15:30:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.603 15:30:14 -- setup/common.sh@33 -- # echo 0 00:04:52.603 15:30:14 -- setup/common.sh@33 -- # return 0 00:04:52.603 15:30:14 -- setup/hugepages.sh@100 -- # resv=0 00:04:52.603 nr_hugepages=1024 00:04:52.603 15:30:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:52.603 resv_hugepages=0 00:04:52.603 15:30:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.603 surplus_hugepages=0 00:04:52.603 15:30:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.603 anon_hugepages=0 00:04:52.603 15:30:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.603 15:30:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.603 15:30:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:52.603 15:30:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.603 15:30:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:52.603 15:30:14 -- setup/common.sh@18 -- # local node= 00:04:52.603 15:30:14 -- setup/common.sh@19 -- # local var val 00:04:52.603 15:30:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.603 15:30:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.603 15:30:14 -- setup/common.sh@23 -- # [[ -e 
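
verify_nr_hugepages then cross-checks the kernel's view against the request: HugePages_Total read from meminfo must equal the requested nr_hugepages plus any surplus and reserved pages, and in this run anon, surp, and resv are all zero. The checks condensed from the hugepages.sh@107-109 lines visible in the trace:

  nr_hugepages=1024             # requested by the test
  anon=0 surp=0 resv=0          # AnonHugePages, HugePages_Surp, HugePages_Rsvd above
  total=1024                    # HugePages_Total via get_meminfo
  (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage count"
  (( total == nr_hugepages )) && echo "nr_hugepages=$nr_hugepages as expected"
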
/sys/devices/system/node/node/meminfo ]]
00:04:52.603 15:30:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.603 15:30:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.603 15:30:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.604 15:30:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7986296 kB' 'MemAvailable: 9526332 kB' 'Buffers: 2436 kB' 'Cached: 1754188 kB' 'SwapCached: 0 kB' 'Active: 459476 kB' 'Inactive: 1414152 kB' 'Active(anon): 127476 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118344 kB' 'Mapped: 48776 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 134016 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71920 kB' 'KernelStack: 6268 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:04:52.604-605 15:30:14 -- setup/common.sh@31-32 -- # (scan: every key from MemTotal through Unaccepted fails [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and hits 'continue')
00:04:52.605 15:30:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:52.605 15:30:14 -- setup/common.sh@33 -- # echo 1024
00:04:52.605 15:30:14 -- setup/common.sh@33 -- # return 0
00:04:52.605 15:30:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:52.605 15:30:14 -- setup/hugepages.sh@112 -- # get_nodes
00:04:52.605 15:30:14 -- setup/hugepages.sh@27 -- # local node
00:04:52.605 15:30:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:52.605 15:30:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:52.605 15:30:14 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:52.605 15:30:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:52.605 15:30:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:52.605 15:30:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:52.605 15:30:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:52.605 15:30:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.605 15:30:14 -- setup/common.sh@18 -- # local node=0
00:04:52.605 15:30:14 -- setup/common.sh@19 -- # local var val
00:04:52.605 15:30:14 -- setup/common.sh@20 -- # local mem_f mem
00:04:52.605 15:30:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.605 15:30:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:52.605 15:30:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:52.605 15:30:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.605 15:30:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.605 15:30:14 -- setup/common.sh@31 -- # IFS=': '
00:04:52.605 15:30:14 -- setup/common.sh@31 -- # read -r var val _
00:04:52.605 15:30:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7986296 kB' 'MemUsed: 4255676 kB' 'SwapCached: 0 kB' 'Active: 459560 kB' 'Inactive: 1414152 kB' 'Active(anon): 127560 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1756624 kB' 'Mapped: 48776 kB' 'AnonPages: 118688 kB' 'Shmem: 10472 kB' 'KernelStack: 6284 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62096 kB' 'Slab: 134016 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
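A note on the node-scoped read above: get_meminfo swaps its input file when it is called with a node number, which is why this node0 snapshot carries MemUsed and FilePages instead of system-wide keys like SwapTotal or CommitLimit. A minimal sketch of that selection step, reconstructed from the setup/common.sh@17-@29 trace lines (the mapfile redirection and the error path for a missing node are assumptions; only the tested expressions appear in the trace):

    # Sketch: choose /proc/meminfo or the per-node file, then strip the
    # "Node 0 " prefix that the sysfs per-node meminfo adds to every line.
    shopt -s extglob                     # needed for the +([0-9]) pattern
    get_meminfo_file() {                 # hypothetical name for this step
        local node=${1:-}
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        elif [[ -n $node ]]; then
            return 1                     # assumption: bail out on a bogus node
        fi
        mapfile -t mem < "$mem_f"        # assumption: plain file redirection
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"        # the key scan consumes these lines
    }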
00:04:52.605-607 15:30:14 -- setup/common.sh@31-32 -- # (scan: every node0 key from MemTotal through HugePages_Free fails [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hits 'continue')
00:04:52.607 15:30:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.607 15:30:14 -- setup/common.sh@33 -- # echo 0
00:04:52.607 15:30:14 -- setup/common.sh@33 -- # return 0
00:04:52.607 15:30:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:52.607 15:30:14 --
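The long runs of 'continue' lines condensed above all come from one small loop in setup/common.sh: each cleaned meminfo line is split on ': ', skipped until the requested key matches, and then its value is echoed and the function returns. A minimal sketch of that scan, assuming the printf-into-read plumbing shown at the @16/@31 trace lines (the function name get_meminfo_value is hypothetical; in SPDK this logic lives inside get_meminfo itself):

    # Sketch: scan cleaned meminfo lines for one key and print its value.
    # xtrace renders the literal match target escaped, e.g.
    # \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, which is why the trace looks that way.
    get_meminfo_value() {
        local get=$1; shift
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # one traced test + continue per key
            echo "$val"                       # e.g. "echo 1024" in the trace
            return 0
        done < <(printf '%s\n' "$@")          # the @16 printf feeds the loop
        return 1
    }

Usage matching the trace above: get_meminfo_value HugePages_Total "${mem[@]}" prints 1024.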
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.607 15:30:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.607 15:30:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.607 15:30:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:52.607 node0=1024 expecting 1024 00:04:52.607 15:30:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:52.607 00:04:52.607 real 0m0.693s 00:04:52.607 user 0m0.347s 00:04:52.607 sys 0m0.394s 00:04:52.607 15:30:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.607 15:30:14 -- common/autotest_common.sh@10 -- # set +x 00:04:52.607 ************************************ 00:04:52.607 END TEST even_2G_alloc 00:04:52.607 ************************************ 00:04:52.881 15:30:14 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:52.881 15:30:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:52.881 15:30:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:52.881 15:30:14 -- common/autotest_common.sh@10 -- # set +x 00:04:52.881 ************************************ 00:04:52.881 START TEST odd_alloc 00:04:52.881 ************************************ 00:04:52.881 15:30:14 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:52.881 15:30:14 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:52.881 15:30:14 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:52.881 15:30:14 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:52.881 15:30:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.881 15:30:14 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:52.881 15:30:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:52.881 15:30:14 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:52.881 15:30:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.881 15:30:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:52.881 15:30:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:52.881 15:30:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.881 15:30:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.881 15:30:14 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:52.881 15:30:14 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:52.881 15:30:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.881 15:30:14 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:52.881 15:30:14 -- setup/hugepages.sh@83 -- # : 0 00:04:52.881 15:30:14 -- setup/hugepages.sh@84 -- # : 0 00:04:52.881 15:30:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.881 15:30:14 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:52.881 15:30:14 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:52.881 15:30:14 -- setup/hugepages.sh@160 -- # setup output 00:04:52.881 15:30:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.881 15:30:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:53.142 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:53.404 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:53.404 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:53.404 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:53.404 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:53.404 15:30:14 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:53.404 15:30:14 
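Before the odd_alloc verification resumes below, it is worth pinning down where nr_hugepages=1025 came from in the trace above: HUGEMEM=2049 is in megabytes, get_test_nr_hugepages receives it as 2098176 kB (the @159/@49 lines), and with the default 2048 kB hugepage that is 1024.5 pages, which the helper lands on as the odd count 1025, the whole point of the odd_alloc test. A worked check (the ceiling rounding is an assumption; the trace only shows the input 2098176 and the result 1025):

    # Worked check of the odd_alloc sizing seen in the trace.
    hugemem_mb=2049                          # HUGEMEM=2049 at @160
    size_kb=$((hugemem_mb * 1024))           # 2098176, matches size=2098176 at @49
    page_kb=2048                             # Hugepagesize: 2048 kB in the snapshots
    nr=$(( (size_kb + page_kb - 1) / page_kb ))
    echo "$nr"                               # 1025 -> nr_hugepages=1025 at @57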
-- setup/hugepages.sh@89 -- # local node
00:04:53.404 15:30:14 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:53.404 15:30:14 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:53.404 15:30:14 -- setup/hugepages.sh@92 -- # local surp
00:04:53.404 15:30:14 -- setup/hugepages.sh@93 -- # local resv
00:04:53.404 15:30:14 -- setup/hugepages.sh@94 -- # local anon
00:04:53.404 15:30:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:53.404 15:30:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:53.404 15:30:14 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:53.404 15:30:14 -- setup/common.sh@18 -- # local node=
00:04:53.404 15:30:14 -- setup/common.sh@19 -- # local var val
00:04:53.404 15:30:14 -- setup/common.sh@20 -- # local mem_f mem
00:04:53.404 15:30:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.404 15:30:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.404 15:30:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.404 15:30:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.404 15:30:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.404 15:30:14 -- setup/common.sh@31 -- # IFS=': '
00:04:53.404 15:30:14 -- setup/common.sh@31 -- # read -r var val _
00:04:53.404 15:30:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7975540 kB' 'MemAvailable: 9515576 kB' 'Buffers: 2436 kB' 'Cached: 1754188 kB' 'SwapCached: 0 kB' 'Active: 459388 kB' 'Inactive: 1414152 kB' 'Active(anon): 127388 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118764 kB' 'Mapped: 48696 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 133960 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71864 kB' 'KernelStack: 6184 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 344624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:04:53.404-405 15:30:14 -- setup/common.sh@31-32 -- # (scan: every key from MemTotal through HardwareCorrupted fails [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and hits 'continue')
00:04:53.405 15:30:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:53.405 15:30:14 -- setup/common.sh@33 -- # echo 0
00:04:53.405 15:30:14 -- setup/common.sh@33 -- # return 0
00:04:53.405 15:30:14 -- setup/hugepages.sh@97 -- # anon=0
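With anon in hand, verify_nr_hugepages goes on below to collect the surplus and reserved counts the same way; the final assertion is then the same shape as the @110 check in even_2G_alloc earlier. A sketch of the bookkeeping (variable names follow the @89-@100 trace lines; the exact assertion and error handling are assumptions):

    # Sketch: verify_nr_hugepages bookkeeping as traced.
    anon=$(get_meminfo AnonHugePages)      # 0 above
    surp=$(get_meminfo HugePages_Surp)     # 0, collected next in the trace
    resv=$(get_meminfo HugePages_Rsvd)     # 0, collected after that
    nr=$(get_meminfo HugePages_Total)      # 1025 for odd_alloc
    (( nr == nr_hugepages + surp + resv )) # assumption: mirrors the @110 test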
00:04:53.405 15:30:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:53.405 15:30:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:53.405 15:30:14 -- setup/common.sh@18 -- # local node=
00:04:53.405 15:30:14 -- setup/common.sh@19 -- # local var val
00:04:53.405 15:30:14 -- setup/common.sh@20 -- # local mem_f mem
00:04:53.405 15:30:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.405 15:30:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.405 15:30:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.405 15:30:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.405 15:30:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.405 15:30:14 -- setup/common.sh@31 -- # IFS=': '
00:04:53.405 15:30:14 -- setup/common.sh@31 -- # read -r var val _
00:04:53.406 15:30:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7975540 kB' 'MemAvailable: 9515576 kB' 'Buffers: 2436 kB' 'Cached: 1754188 kB' 'SwapCached: 0 kB' 'Active: 459188 kB' 'Inactive: 1414152 kB' 'Active(anon): 127188 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118500 kB' 'Mapped: 48636 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 134020 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71924 kB' 'KernelStack: 6224 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 344624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:04:53.406-407 15:30:14 -- setup/common.sh@31-32 -- # (scan: every key from MemTotal through HugePages_Rsvd fails [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hits 'continue')
00:04:53.407 15:30:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.407 15:30:14 -- setup/common.sh@33 -- # echo 0
00:04:53.407 15:30:14 -- setup/common.sh@33 -- # return 0
00:04:53.407 15:30:14 -- setup/hugepages.sh@99 -- # surp=0
00:04:53.407 15:30:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:53.407 15:30:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:53.407 15:30:14 -- setup/common.sh@18 -- # local node=
00:04:53.407 15:30:14 -- setup/common.sh@19 -- # local var val
00:04:53.407 15:30:14 -- setup/common.sh@20 -- # local mem_f mem
00:04:53.407 15:30:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.407 15:30:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.407 15:30:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.407 15:30:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.407 15:30:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.407 15:30:14 -- setup/common.sh@31 -- # IFS=': '
00:04:53.407 15:30:14 -- setup/common.sh@31 -- # read -r var val _
00:04:53.407 15:30:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7975648 kB' 'MemAvailable: 9515684 kB' 'Buffers: 2436 kB' 'Cached: 1754188 kB' 'SwapCached: 0 kB' 'Active: 459184 kB' 'Inactive: 1414152 kB' 'Active(anon): 127184 kB' 'Inactive(anon): 0 kB'
00:04:53.407 15:30:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:53.407 15:30:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:53.407 15:30:14 -- setup/common.sh@18 -- # local node=
00:04:53.407 15:30:14 -- setup/common.sh@19 -- # local var val
00:04:53.407 15:30:14 -- setup/common.sh@20 -- # local mem_f mem
00:04:53.407 15:30:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.407 15:30:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.407 15:30:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.407 15:30:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.407 15:30:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.407 15:30:14 -- setup/common.sh@31 -- # IFS=': '
00:04:53.407 15:30:14 -- setup/common.sh@31 -- # read -r var val _
00:04:53.407 15:30:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7975648 kB' 'MemAvailable: 9515684 kB' 'Buffers: 2436 kB' 'Cached: 1754188 kB' 'SwapCached: 0 kB' 'Active: 459184 kB' 'Inactive: 1414152 kB' 'Active(anon): 127184 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118576 kB' 'Mapped: 48636 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 134012 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71916 kB' 'KernelStack: 6256 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 344624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:04:53.407 15:30:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:53.407 15:30:14 -- setup/common.sh@32 -- # continue
[... identical read/continue records for every key from MemFree through HugePages_Free; none matches HugePages_Rsvd ...]
00:04:53.408 15:30:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:53.408 15:30:14 -- setup/common.sh@33 -- # echo 0
00:04:53.408 15:30:14 -- setup/common.sh@33 -- # return 0
00:04:53.408 15:30:14 -- setup/hugepages.sh@100 -- # resv=0
00:04:53.408 nr_hugepages=1025
00:04:53.408 15:30:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:53.408 resv_hugepages=0
00:04:53.408 15:30:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:53.408 surplus_hugepages=0
00:04:53.408 15:30:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:53.408 anon_hugepages=0
00:04:53.408 15:30:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:53.408 15:30:14 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:53.408 15:30:14 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
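With surp=0 and resv=0 read back, the @107 check above asserts the script's accounting invariant before @110 re-reads HugePages_Total to confirm it: the total the kernel reports must equal the pages the test configured plus surplus plus reserved. The same check as a standalone snippet, using this run's values (variable names follow the trace; the error message is illustrative only):

    nr_hugepages=1025 surp=0 resv=0
    total=1025   # HugePages_Total as read from /proc/meminfo above
    if ! (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting mismatch: $total != $((nr_hugepages + surp + resv))" >&2
    fi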
00:04:53.408 15:30:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:53.408 15:30:14 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:53.408 15:30:14 -- setup/common.sh@18 -- # local node=
00:04:53.408 15:30:14 -- setup/common.sh@19 -- # local var val
00:04:53.408 15:30:14 -- setup/common.sh@20 -- # local mem_f mem
00:04:53.408 15:30:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.408 15:30:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.408 15:30:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.408 15:30:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.408 15:30:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.408 15:30:14 -- setup/common.sh@31 -- # IFS=': '
00:04:53.408 15:30:14 -- setup/common.sh@31 -- # read -r var val _
00:04:53.409 15:30:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7975648 kB' 'MemAvailable: 9515684 kB' 'Buffers: 2436 kB' 'Cached: 1754188 kB' 'SwapCached: 0 kB' 'Active: 459172 kB' 'Inactive: 1414152 kB' 'Active(anon): 127172 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118576 kB' 'Mapped: 48636 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 134008 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71912 kB' 'KernelStack: 6256 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 344624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:04:53.409 15:30:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:53.409 15:30:14 -- setup/common.sh@32 -- # continue
[... identical read/continue records for every key from MemFree through Unaccepted; none matches HugePages_Total ...]
00:04:53.410 15:30:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:53.410 15:30:14 -- setup/common.sh@33 -- # echo 1025
00:04:53.410 15:30:14 -- setup/common.sh@33 -- # return 0
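The 1025 echoed here also squares with the rest of the dump: 1025 pages at the reported Hugepagesize of 2048 kB is 1025 * 2048 = 2099200 kB, exactly the 'Hugetlb: 2099200 kB' line printed above. A one-line check:

    echo $(( 1025 * 2048 ))   # 2099200 (kB), matching the Hugetlb field above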
00:04:53.410 15:30:14 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:53.410 15:30:14 -- setup/hugepages.sh@112 -- # get_nodes
00:04:53.410 15:30:14 -- setup/hugepages.sh@27 -- # local node
00:04:53.410 15:30:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:53.410 15:30:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:53.410 15:30:14 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:53.410 15:30:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:53.410 15:30:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:53.410 15:30:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:53.410 15:30:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:53.410 15:30:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:53.410 15:30:14 -- setup/common.sh@18 -- # local node=0
00:04:53.410 15:30:14 -- setup/common.sh@19 -- # local var val
00:04:53.410 15:30:14 -- setup/common.sh@20 -- # local mem_f mem
00:04:53.410 15:30:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.410 15:30:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:53.410 15:30:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:53.410 15:30:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.410 15:30:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.410 15:30:14 -- setup/common.sh@31 -- # IFS=': '
00:04:53.410 15:30:14 -- setup/common.sh@31 -- # read -r var val _
00:04:53.410 15:30:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7975648 kB' 'MemUsed: 4266324 kB' 'SwapCached: 0 kB' 'Active: 459164 kB' 'Inactive: 1414152 kB' 'Active(anon): 127164 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1756624 kB' 'Mapped: 48636 kB' 'AnonPages: 118572 kB' 'Shmem: 10472 kB' 'KernelStack: 6256 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62096 kB' 'Slab: 134004 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:04:53.410 15:30:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.410 15:30:14 -- setup/common.sh@32 -- # continue
[... identical read/continue records for every node0 key from MemFree through HugePages_Free; none matches HugePages_Surp ...]
00:04:53.411 15:30:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.411 15:30:14 -- setup/common.sh@33 -- # echo 0
00:04:53.411 15:30:14 -- setup/common.sh@33 -- # return 0
00:04:53.411 15:30:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:53.411 15:30:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:53.411 node0=1025 expecting 1025
00:04:53.411 15:30:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:53.411 15:30:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:53.411 15:30:14 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:53.411 15:30:14 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:53.411 
00:04:53.411 real	0m0.699s
00:04:53.411 user	0m0.346s
00:04:53.411 sys	0m0.397s
00:04:53.411 15:30:14 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:53.411 ************************************
00:04:53.411 END TEST odd_alloc
00:04:53.411 ************************************
00:04:53.411 15:30:14 -- common/autotest_common.sh@10 -- # set +x
00:04:53.411 15:30:14 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:53.411 15:30:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:53.411 15:30:14 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:53.411 15:30:14 -- common/autotest_common.sh@10 -- # set +x
00:04:53.411 ************************************
00:04:53.411 START TEST custom_alloc
00:04:53.411 ************************************
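custom_alloc (traced below) sizes a 512-page, node-0 reservation and hands it to setup.sh through HUGENODE='nodes_hp[0]=512'. The kernel exposes exactly this kind of per-node count through sysfs; a hedged sketch of the generic route (the sysfs path is the standard kernel interface, but the trace does not show which commands setup.sh actually runs):

    # Request 512 x 2048 kB hugepages on NUMA node 0 (run as root).
    # Standard kernel sysfs interface, not necessarily setup.sh's exact method.
    echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages   # 512 on success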
00:04:53.411 15:30:14 -- common/autotest_common.sh@1104 -- # custom_alloc
00:04:53.411 15:30:14 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:53.411 15:30:14 -- setup/hugepages.sh@169 -- # local node
00:04:53.411 15:30:14 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:53.411 15:30:14 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:53.411 15:30:14 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:53.411 15:30:14 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:53.411 15:30:14 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:53.411 15:30:14 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:53.411 15:30:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:53.411 15:30:14 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:53.411 15:30:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:53.411 15:30:14 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:53.411 15:30:14 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:53.411 15:30:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:53.411 15:30:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:53.411 15:30:14 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:53.411 15:30:14 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:53.411 15:30:14 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:53.411 15:30:14 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:53.411 15:30:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:53.411 15:30:14 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:53.411 15:30:14 -- setup/hugepages.sh@83 -- # : 0
00:04:53.411 15:30:14 -- setup/hugepages.sh@84 -- # : 0
00:04:53.411 15:30:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:53.411 15:30:14 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:53.411 15:30:14 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:53.411 15:30:14 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:53.411 15:30:14 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:53.411 15:30:14 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:53.411 15:30:14 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:53.411 15:30:14 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:53.411 15:30:14 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:53.411 15:30:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:53.411 15:30:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:53.411 15:30:14 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:53.411 15:30:14 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:53.411 15:30:14 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:53.411 15:30:14 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:53.411 15:30:14 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:53.411 15:30:14 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:53.411 15:30:14 -- setup/hugepages.sh@78 -- # return 0
00:04:53.411 15:30:14 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:53.411 15:30:14 -- setup/hugepages.sh@187 -- # setup output
00:04:53.411 15:30:14 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:53.411 15:30:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:53.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:53.980 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:53.980 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:53.980 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:53.980 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:53.980 15:30:15 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:53.980 15:30:15 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:53.980 15:30:15 -- setup/hugepages.sh@89 -- # local node
00:04:53.980 15:30:15 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:53.980 15:30:15 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:53.980 15:30:15 -- setup/hugepages.sh@92 -- # local surp
00:04:53.980 15:30:15 -- setup/hugepages.sh@93 -- # local resv
00:04:53.980 15:30:15 -- setup/hugepages.sh@94 -- # local anon
00:04:53.980 15:30:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:53.980 15:30:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:53.980 15:30:15 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:53.980 15:30:15 -- setup/common.sh@18 -- # local node=
00:04:53.980 15:30:15 -- setup/common.sh@19 -- # local var val
00:04:53.980 15:30:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:53.980 15:30:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.980 15:30:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.980 15:30:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.980 15:30:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.980 15:30:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.980 15:30:15 -- setup/common.sh@31 -- # IFS=': '
00:04:53.980 15:30:15 -- setup/common.sh@31 -- # read -r var val _
00:04:53.980 15:30:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9021168 kB' 'MemAvailable: 10561208 kB' 'Buffers: 2436 kB' 'Cached: 1754192 kB' 'SwapCached: 0 kB' 'Active: 460128 kB' 'Inactive: 1414156 kB' 'Active(anon): 128128 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 119228 kB' 'Mapped: 48612 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 134080 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71984 kB' 'KernelStack: 6288 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 344624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:04:53.980 15:30:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:53.980 15:30:15 -- setup/common.sh@32 -- # continue
[... identical read/continue records for every key from MemFree through HardwareCorrupted; none matches AnonHugePages ...]
00:04:54.242 15:30:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:54.242 15:30:15 -- setup/common.sh@33 -- # echo 0
00:04:54.242 15:30:15 -- setup/common.sh@33 -- # return 0
00:04:54.242 15:30:15 -- setup/hugepages.sh@97 -- # anon=0
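The hugepages.sh@96 test above reads the kernel's transparent-hugepage mode, in which the bracketed word is the active setting ('always [madvise] never' on this VM), and only counts AnonHugePages when THP is not disabled; here the count comes back 0, hence anon=0. The mode string lives in a standard sysfs file; a short sketch of the same gate:

    # Only meaningful to check AnonHugePages when THP is not switched off ('[never]').
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. 'always [madvise] never'
    if [[ $thp != *"[never]"* ]]; then
        grep AnonHugePages /proc/meminfo   # 'AnonHugePages: 0 kB' in this run
    fi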
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:53.980 15:30:15 -- setup/common.sh@32 -- # continue
[... IFS=': ' / read -r var val _ / continue cycle repeats for every remaining /proc/meminfo key, Buffers through HardwareCorrupted ...]
00:04:54.242 15:30:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:54.242 15:30:15 -- setup/common.sh@33 -- # echo 0
00:04:54.242 15:30:15 -- setup/common.sh@33 -- # return 0
00:04:54.242 15:30:15 -- setup/hugepages.sh@97 -- # anon=0
00:04:54.242 15:30:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:54.242 15:30:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:54.242 15:30:15 -- setup/common.sh@18 -- # local node=
00:04:54.242 15:30:15 -- setup/common.sh@19 -- # local var val
00:04:54.242 15:30:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:54.242 15:30:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.242 15:30:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.242 15:30:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.242 15:30:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.242 15:30:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.242 15:30:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9021168 kB' 'MemAvailable: 10561208 kB' 'Buffers: 2436 kB' 'Cached: 1754192 kB' 'SwapCached: 0 kB' 'Active: 459628 kB' 'Inactive: 1414156 kB' 'Active(anon): 127628 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118800 kB' 'Mapped: 48552 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 134068 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71972 kB' 'KernelStack: 6288 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 344624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
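What the trace records here is setup/common.sh's get_meminfo helper: it reads mem_f line by line with IFS=': ' read -r var val _, continues past every key that is not the one requested, and echoes the value once the key matches. A minimal bash sketch of that loop, reconstructed from the xtrace above (the exact guards and line layout of setup/common.sh may differ, and the trailing return 1 is an assumption):

    shopt -s extglob  # needed for the +([0-9]) patterns below
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem line
        mem_f=/proc/meminfo
        # when a node id is passed, read the per-node sysfs copy instead
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # sysfs lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # trace shows an unquoted pattern match; literal match is equivalent here
            echo "$val"
            return 0
        done
        return 1  # assumption: key not found
    }

With that shape, get_meminfo AnonHugePages against the snapshot above prints 0, which is exactly the echo 0 / anon=0 pair in the trace.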
00:04:54.242 15:30:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.242 15:30:15 -- setup/common.sh@32 -- # continue
[... read/continue cycle repeats for each key, MemFree through HugePages_Rsvd ...]
00:04:54.244 15:30:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.244 15:30:15 -- setup/common.sh@33 -- # echo 0
00:04:54.244 15:30:15 -- setup/common.sh@33 -- # return 0
00:04:54.244 15:30:15 -- setup/hugepages.sh@99 -- # surp=0
00:04:54.244 15:30:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:54.244 15:30:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:54.244 15:30:15 -- setup/common.sh@18 -- # local node=
00:04:54.244 15:30:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.244 15:30:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.244 15:30:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.244 15:30:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9021168 kB' 'MemAvailable: 10561208 kB' 'Buffers: 2436 kB' 'Cached: 1754192 kB' 'SwapCached: 0 kB' 'Active: 459576 kB' 'Inactive: 1414156 kB' 'Active(anon): 127576 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118696 kB' 'Mapped: 48448 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 134060 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71964 kB' 'KernelStack: 6272 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 344624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:04:54.244 15:30:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[... read/continue cycle repeats for each key, MemFree through HugePages_Free ...]
00:04:54.245 15:30:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:54.245 15:30:15 -- setup/common.sh@33 -- # echo 0
00:04:54.245 15:30:15 -- setup/common.sh@33 -- # return 0
00:04:54.245 nr_hugepages=512
00:04:54.245 resv_hugepages=0
00:04:54.245 surplus_hugepages=0
00:04:54.245 anon_hugepages=0
00:04:54.245 15:30:15 -- setup/hugepages.sh@100 -- # resv=0
00:04:54.245 15:30:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:54.245 15:30:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:54.245 15:30:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:54.245 15:30:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:54.245 15:30:15 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:54.245 15:30:15 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:54.245 15:30:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:54.245 15:30:15 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:54.245 15:30:15 -- setup/common.sh@18 -- # local node=
00:04:54.245 15:30:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.245 15:30:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.245 15:30:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.245 15:30:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9021168 kB' 'MemAvailable: 10561208 kB' 'Buffers: 2436 kB' 'Cached: 1754192 kB' 'SwapCached: 0 kB' 'Active: 459344 kB' 'Inactive: 1414156 kB' 'Active(anon): 127344 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118728 kB' 'Mapped: 48448 kB' 'Shmem: 10472 kB' 'KReclaimable: 62096 kB' 'Slab: 134052 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71956 kB' 'KernelStack: 6272 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 344624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:04:54.245 15:30:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
[... read/continue cycle repeats for each key, MemFree through Inactive(anon) ...]
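Taken together, the four probes (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total) feed the consistency check visible at setup/hugepages.sh@107-110: the configured page count must equal what the kernel reports once surplus and reserved pages are folded in. A sketch of that check, assuming the get_meminfo sketch above (variable names follow the trace; the exact guard order in setup/hugepages.sh may differ):

    nr_hugepages=512
    anon=$(get_meminfo AnonHugePages)     # 0 in this run
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    total=$(get_meminfo HugePages_Total)  # 512
    # the kernel's total must match the requested count plus surplus/reserved
    (( total == nr_hugepages + surp + resv )) ||
        echo "hugepage accounting mismatch: $total != $((nr_hugepages + surp + resv))" >&2

Here 512 == 512 + 0 + 0 holds, so the trace proceeds to the per-node pass below.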
00:04:54.246 15:30:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:54.246 15:30:15 -- setup/common.sh@32 -- # continue
[... read/continue cycle repeats for each key, Inactive(file) through HugePages_Free ... wait, through Unaccepted ...]
00:04:54.247 15:30:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:54.247 15:30:15 -- setup/common.sh@33 -- # echo 512
00:04:54.247 15:30:15 -- setup/common.sh@33 -- # return 0
00:04:54.247 15:30:15 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:54.247 15:30:15 -- setup/hugepages.sh@112 -- # get_nodes
00:04:54.247 15:30:15 -- setup/hugepages.sh@27 -- # local node
00:04:54.247 15:30:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:54.247 15:30:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:54.247 15:30:15 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:54.247 15:30:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:54.247 15:30:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:54.247 15:30:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:54.247 15:30:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:54.247 15:30:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:54.247 15:30:15 -- setup/common.sh@18 -- # local node=0
00:04:54.247 15:30:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.247 15:30:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:54.247 15:30:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:54.247 15:30:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.247 15:30:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.247 15:30:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9021168 kB' 'MemUsed: 3220804 kB' 'SwapCached: 0 kB' 'Active: 459672 kB' 'Inactive: 1414156 kB' 'Active(anon): 127672 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1756628 kB' 'Mapped: 48708 kB' 'AnonPages: 119108 kB' 'Shmem: 10472 kB' 'KernelStack: 6320 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62096 kB' 'Slab: 134052 kB' 'SReclaimable: 62096 kB' 'SUnreclaim: 71956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
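Passing a node id flips get_meminfo to the per-node sysfs file, whose lines carry a "Node 0 " prefix; the mem=("${mem[@]#Node +([0-9]) }") expansion at common.sh@29 strips it so the same read loop works unchanged. A sketch of the per-node pass driven by get_nodes, folding its enumeration and probe loops into one for brevity (the nodes_sys/nodes_test bookkeeping is simplified relative to setup/hugepages.sh):

    shopt -s extglob
    declare -A nodes_sys=()
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}                     # ".../node0" -> "0"
        nodes_sys[$node]=512                        # expected hugepages on this node
        surp=$(get_meminfo HugePages_Surp "$node")  # per-node surplus, 0 in this run
        echo "node$node: HugePages_Surp=$surp"
    done

On this single-node VM (no_nodes=1) the loop runs once, for node0.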
00:04:54.247 15:30:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.248 15:30:15 -- setup/common.sh@32 -- # continue
[... read/continue cycle repeats for each node0 key, MemFree through SUnreclaim ...]
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.249 15:30:15 -- setup/common.sh@32 -- # continue 00:04:54.249 15:30:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.249 15:30:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.249 15:30:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.249 15:30:15 -- setup/common.sh@32 -- # continue 00:04:54.249 15:30:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.249 15:30:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.249 15:30:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.249 15:30:15 -- setup/common.sh@32 -- # continue 00:04:54.249 15:30:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.249 15:30:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.249 15:30:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.249 15:30:15 -- setup/common.sh@32 -- # continue 00:04:54.249 15:30:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.249 15:30:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.249 15:30:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.249 15:30:15 -- setup/common.sh@32 -- # continue 00:04:54.249 15:30:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.249 15:30:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.249 15:30:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.249 15:30:15 -- setup/common.sh@32 -- # continue 00:04:54.249 15:30:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.249 15:30:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.249 15:30:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.249 15:30:15 -- setup/common.sh@32 -- # continue 00:04:54.249 15:30:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.249 15:30:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.249 15:30:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.249 15:30:15 -- setup/common.sh@32 -- # continue 00:04:54.249 15:30:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.249 15:30:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.249 15:30:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.249 15:30:15 -- setup/common.sh@33 -- # echo 0 00:04:54.249 15:30:15 -- setup/common.sh@33 -- # return 0 00:04:54.249 15:30:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.249 15:30:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.249 15:30:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.249 15:30:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.249 node0=512 expecting 512 00:04:54.249 15:30:15 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:54.249 15:30:15 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:54.249 00:04:54.249 real 0m0.746s 00:04:54.249 user 0m0.337s 00:04:54.249 sys 0m0.426s 00:04:54.249 15:30:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.249 15:30:15 -- common/autotest_common.sh@10 -- # set +x 00:04:54.249 ************************************ 00:04:54.249 END TEST custom_alloc 00:04:54.249 ************************************ 00:04:54.249 15:30:15 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:54.249 15:30:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.249 15:30:15 -- 
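Note: the no_shrink_alloc test that starts in the trace below sizes its pool through get_test_nr_hugepages, which turns a requested size into a hugepage count and spreads it over the requested NUMA nodes. A back-of-the-envelope sketch of that arithmetic, reconstructed only from what the trace shows (the kB unit for size is an assumption made so the traced numbers, 2097152 / 2048 = 1024, line up; this is not the SPDK helper itself):

#!/usr/bin/env bash
# Illustrative sketch of the pool sizing traced at setup/hugepages.sh@49-@73
# (get_test_nr_hugepages 2097152 0 -> nr_hugepages=1024 on node 0).
size=2097152      # requested pool size; assumed kB to match the trace
node_ids=("0")    # target NUMA nodes, per the traced run

default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 (kB)
(( size >= default_hugepages )) || { echo "pool smaller than one page" >&2; exit 1; }
nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024

declare -A nodes_test
for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages            # all 1024 pages land on node 0 here
done
echo "nr_hugepages=$nr_hugepages"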
common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.249 15:30:15 -- common/autotest_common.sh@10 -- # set +x 00:04:54.249 ************************************ 00:04:54.249 START TEST no_shrink_alloc 00:04:54.249 ************************************ 00:04:54.249 15:30:15 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:54.249 15:30:15 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:54.249 15:30:15 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:54.249 15:30:15 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:54.249 15:30:15 -- setup/hugepages.sh@51 -- # shift 00:04:54.249 15:30:15 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:54.249 15:30:15 -- setup/hugepages.sh@52 -- # local node_ids 00:04:54.249 15:30:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.249 15:30:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:54.249 15:30:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:54.249 15:30:15 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:54.249 15:30:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.249 15:30:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:54.249 15:30:15 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:54.249 15:30:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.249 15:30:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.249 15:30:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:54.249 15:30:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:54.249 15:30:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:54.249 15:30:15 -- setup/hugepages.sh@73 -- # return 0 00:04:54.249 15:30:15 -- setup/hugepages.sh@198 -- # setup output 00:04:54.249 15:30:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.249 15:30:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:54.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.819 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.819 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.819 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.819 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.819 15:30:16 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:54.819 15:30:16 -- setup/hugepages.sh@89 -- # local node 00:04:54.819 15:30:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.819 15:30:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.819 15:30:16 -- setup/hugepages.sh@92 -- # local surp 00:04:54.819 15:30:16 -- setup/hugepages.sh@93 -- # local resv 00:04:54.819 15:30:16 -- setup/hugepages.sh@94 -- # local anon 00:04:54.819 15:30:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.819 15:30:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.819 15:30:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.819 15:30:16 -- setup/common.sh@18 -- # local node= 00:04:54.819 15:30:16 -- setup/common.sh@19 -- # local var val 00:04:54.819 15:30:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.819 15:30:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.819 15:30:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.819 15:30:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.819 15:30:16 -- setup/common.sh@28 -- # 
mapfile -t mem 00:04:54.819 15:30:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.819 15:30:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.819 15:30:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.819 15:30:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7974784 kB' 'MemAvailable: 9514820 kB' 'Buffers: 2436 kB' 'Cached: 1754192 kB' 'SwapCached: 0 kB' 'Active: 457392 kB' 'Inactive: 1414156 kB' 'Active(anon): 125392 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116276 kB' 'Mapped: 48172 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 133996 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 71908 kB' 'KernelStack: 6240 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:54.819 [... xtrace condensed: the field loop steps over every key above with "continue" until it reaches AnonHugePages ...] 00:04:54.820 15:30:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.820 15:30:16 -- setup/common.sh@33 -- # echo 0 00:04:54.820 15:30:16 -- setup/common.sh@33 -- # return 0 00:04:54.820 15:30:16 -- setup/hugepages.sh@97 -- # anon=0 00:04:54.820 15:30:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.820 15:30:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.820 15:30:16 -- setup/common.sh@18 -- # local node= 00:04:54.820 15:30:16 -- setup/common.sh@19 -- # local var val 00:04:54.820 15:30:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.820 15:30:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.820 15:30:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.820 15:30:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.820 15:30:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.820 15:30:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.820 15:30:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.820 15:30:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.820 15:30:16 -- setup/common.sh@16 -- # printf '%s\n' [... second meminfo snapshot, identical to the one above except: MemFree: 7975852 kB, MemAvailable: 9515888 kB, Active: 456644 kB, Active(anon): 124644 kB, AnonPages: 115792 kB, Mapped: 47968 kB, Slab: 133980 kB, SUnreclaim: 71892 kB, KernelStack: 6196 kB, PageTables: 3928 kB, Committed_AS: 334640 kB, VmallocUsed: 54500 kB ...] 00:04:54.820 [... xtrace condensed: field loop continues until HugePages_Surp ...] 00:04:54.822 15:30:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.822 15:30:16 -- setup/common.sh@33 -- # echo 0 00:04:54.822 15:30:16 -- setup/common.sh@33 -- # return 0 00:04:54.822 15:30:16 -- setup/hugepages.sh@99 -- # surp=0 00:04:54.822 15:30:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.822 15:30:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.822 15:30:16 -- setup/common.sh@18 -- # local node= 00:04:54.822 15:30:16 -- setup/common.sh@19 -- # local var val 00:04:54.822 15:30:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.822 15:30:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.822 15:30:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.822 15:30:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.822 15:30:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.822 15:30:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.822 15:30:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.822 15:30:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.822 15:30:16 -- setup/common.sh@16 -- # printf '%s\n' [... third meminfo snapshot, identical to the previous except: Active: 456676 kB, Active(anon): 124676 kB, AnonPages: 115896 kB, KernelStack: 6212 kB, PageTables: 3976 kB ...] 00:04:54.822
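Note: the get_meminfo helper traced above reads the whole meminfo file into an array, strips any per-node "Node <n> " prefix with an extglob pattern, then walks the fields with IFS=': ' until the requested key matches. A compact re-implementation of the same idea, reconstructed from what the trace shows (names and the node-file path follow the trace; treat it as a sketch, not the exact SPDK source):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {    # usage: get_meminfo <field> [node]
    local get=$1 node=${2-} var val line
    local mem_f=/proc/meminfo mem
    # Per-node meminfo files prefix every line with "Node <n> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix if present
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Surp   # prints 0 on the system traced above

Under xtrace this per-field loop is what produces the long runs of "[[ ... ]] / continue" seen in this log: one comparison and one continue per meminfo key until the match.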
[... xtrace condensed: the same field-by-field scan repeats for HugePages_Rsvd, continuing past every key from MemTotal through HugePages_Free ...] 00:04:55.084 15:30:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.084 15:30:16 -- setup/common.sh@33 -- # echo 0 00:04:55.084 15:30:16 -- setup/common.sh@33 -- # return 0
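Note: the right-hand sides of these comparisons appear as \H\u\g\e\P\a\g\e\s\_\S\u\r\p because xtrace prints the backslash-escaped form bash uses to force a literal (non-glob) match inside [[ ]]; quoting the right-hand side achieves the same effect. A small, self-contained demonstration of the difference:

#!/usr/bin/env bash
var='HugePages_Surp'
pat='HugePages_*'

[[ $var == $pat ]]   && echo "unquoted RHS: glob match"    # fires: * acts as a wildcard
[[ $var == "$pat" ]] && echo "quoted RHS: literal match"   # silent: no literal '*' in var
# Escaping each character, as the xtrace output above shows, is another
# way to compare literally:
[[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] && echo "escaped RHS: literal match"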
00:04:55.084 15:30:16 -- setup/hugepages.sh@100 -- # resv=0
00:04:55.084 15:30:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:55.084 15:30:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:55.084 15:30:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:55.084 15:30:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:55.084 15:30:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:55.084 15:30:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:55.084 15:30:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:55.084 15:30:16 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:55.084 15:30:16 -- setup/common.sh@18 -- # local node=
00:04:55.084 15:30:16 -- setup/common.sh@19 -- # local var val
00:04:55.084 15:30:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:55.084 15:30:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.084 15:30:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:55.084 15:30:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:55.084 15:30:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.084 15:30:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.084 15:30:16 -- setup/common.sh@31 -- # IFS=': '
00:04:55.084 15:30:16 -- setup/common.sh@31 -- # read -r var val _
00:04:55.084 15:30:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7975852 kB' 'MemAvailable: 9515888 kB' 'Buffers: 2436 kB' 'Cached: 1754192 kB' 'SwapCached: 0 kB' 'Active: 456712 kB' 'Inactive: 1414156 kB' 'Active(anon): 124712 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115932 kB' 'Mapped: 47968 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 133980 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 71892 kB' 'KernelStack: 6228 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
[xtrace then compares every snapshot key against HugePages_Total, continuing past each non-match]
00:04:55.085 15:30:16 -- setup/common.sh@33 -- # echo 1024
00:04:55.085 15:30:16 -- setup/common.sh@33 -- # return 0
00:04:55.085 15:30:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:55.085 15:30:16 -- setup/hugepages.sh@112 -- # get_nodes
00:04:55.085 15:30:16 -- setup/hugepages.sh@27 -- # local node
00:04:55.085 15:30:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:55.085 15:30:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:55.085 15:30:16 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:55.085 15:30:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:55.085 15:30:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:55.085 15:30:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
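The @117 call that follows queries node 0 specifically. Per-node lookups read /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix; the mem=("${mem[@]#Node +([0-9]) }") step in the trace strips that prefix before the key comparison. A minimal sketch of the per-node variant (names illustrative, not the exact setup/common.sh code):

  shopt -s extglob   # required for the +([0-9]) pattern below
  get_node_meminfo() {
    # Per-node variant: /sys/devices/system/node/nodeN/meminfo lines look like
    # "Node 0 HugePages_Surp: 0", so the "Node N " prefix is stripped first.
    local node=$1 get=$2 line var val _
    while read -r line; do
      line=${line#Node +([0-9]) }
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
  }

Against the node-0 snapshot below, get_node_meminfo 0 HugePages_Surp would print 0.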
00:04:55.085 15:30:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:55.085 15:30:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:55.085 15:30:16 -- setup/common.sh@18 -- # local node=0
00:04:55.086 15:30:16 -- setup/common.sh@19 -- # local var val
00:04:55.086 15:30:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:55.086 15:30:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.086 15:30:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:55.086 15:30:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:55.086 15:30:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.086 15:30:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.086 15:30:16 -- setup/common.sh@31 -- # IFS=': '
00:04:55.086 15:30:16 -- setup/common.sh@31 -- # read -r var val _
00:04:55.086 15:30:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7975600 kB' 'MemUsed: 4266372 kB' 'SwapCached: 0 kB' 'Active: 456672 kB' 'Inactive: 1414156 kB' 'Active(anon): 124672 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1756628 kB' 'Mapped: 47968 kB' 'AnonPages: 115660 kB' 'Shmem: 10472 kB' 'KernelStack: 6212 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62088 kB' 'Slab: 133980 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 71892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace walks the node-0 snapshot key by key until HugePages_Surp matches]
00:04:55.087 15:30:16 -- setup/common.sh@33 -- # echo 0
00:04:55.087 15:30:16 -- setup/common.sh@33 -- # return 0
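The per-node bookkeeping that follows compares two associative arrays: nodes_sys[] holds what the kernel reports per node and nodes_test[] what the test expects, with each node's surplus folded in before the comparison. A simplified sketch of that accounting (array names follow the trace; the loop body is condensed):

  declare -A nodes_sys=( [0]=1024 ) nodes_test=( [0]=1024 )
  for node in "${!nodes_test[@]}"; do
    surp=$(get_node_meminfo "$node" HugePages_Surp)   # sketch above; 0 here
    (( nodes_test[node] += surp ))
    echo "node${node}=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
  done

With surp=0 this prints the node0=1024 expecting 1024 line seen below.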
00:04:55.087 15:30:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:55.087 15:30:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:55.087 15:30:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:55.087 15:30:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:55.087 15:30:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:55.087 15:30:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:55.087 15:30:16 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:55.087 15:30:16 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:55.087 15:30:16 -- setup/hugepages.sh@202 -- # setup output
00:04:55.087 15:30:16 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:55.087 15:30:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:55.658 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:55.658 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:55.658 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:55.658 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:55.658 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:55.658 INFO: Requested 512 hugepages but 1024 already allocated on node0
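The INFO line reflects setup.sh's idempotent allocation: with CLEAR_HUGE=no, a pool that already meets or exceeds NRHUGE is left untouched rather than resized. A rough sketch of that behaviour (the paths are the standard kernel knobs, but the function itself is illustrative, not scripts/setup.sh verbatim):

  ensure_hugepages() {
    # Resize the pool only when it is smaller than requested.
    local want=${NRHUGE:-512} have
    have=$(< /proc/sys/vm/nr_hugepages)
    if (( have >= want )) && [[ ${CLEAR_HUGE:-no} == no ]]; then
      echo "INFO: Requested $want hugepages but $have already allocated"
      return 0
    fi
    echo "$want" > /proc/sys/vm/nr_hugepages   # requires root
  }

Here NRHUGE=512 against an existing pool of 1024, so nothing is resized and the earlier 1024-page pool survives into the verification pass below.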
00:04:55.658 15:30:17 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:55.658 15:30:17 -- setup/hugepages.sh@89 -- # local node
00:04:55.658 15:30:17 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:55.658 15:30:17 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:55.658 15:30:17 -- setup/hugepages.sh@92 -- # local surp
00:04:55.658 15:30:17 -- setup/hugepages.sh@93 -- # local resv
00:04:55.658 15:30:17 -- setup/hugepages.sh@94 -- # local anon
00:04:55.658 15:30:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:55.658 15:30:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:55.658 15:30:17 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:55.658 15:30:17 -- setup/common.sh@18 -- # local node=
00:04:55.658 15:30:17 -- setup/common.sh@19 -- # local var val
00:04:55.658 15:30:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:55.658 15:30:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.658 15:30:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:55.658 15:30:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:55.658 15:30:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.658 15:30:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.658 15:30:17 -- setup/common.sh@31 -- # IFS=': '
00:04:55.658 15:30:17 -- setup/common.sh@31 -- # read -r var val _
00:04:55.658 15:30:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7973824 kB' 'MemAvailable: 9513864 kB' 'Buffers: 2436 kB' 'Cached: 1754196 kB' 'SwapCached: 0 kB' 'Active: 457200 kB' 'Inactive: 1414160 kB' 'Active(anon): 125200 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414160 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116616 kB' 'Mapped: 48268 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 133940 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 71852 kB' 'KernelStack: 6364 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
[xtrace scans the snapshot key by key until AnonHugePages matches]
00:04:55.659 15:30:17 -- setup/common.sh@33 -- # echo 0
00:04:55.659 15:30:17 -- setup/common.sh@33 -- # return 0
00:04:55.659 15:30:17 -- setup/hugepages.sh@97 -- # anon=0
00:04:55.659 15:30:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:55.659 15:30:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:55.659 15:30:17 -- setup/common.sh@18 -- # local node=
00:04:55.659 15:30:17 -- setup/common.sh@19 -- # local var val
00:04:55.659 15:30:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:55.659 15:30:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.659 15:30:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:55.659 15:30:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:55.659 15:30:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.659 15:30:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.659 15:30:17 -- setup/common.sh@31 -- # IFS=': '
00:04:55.659 15:30:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7974076 kB' 'MemAvailable: 9514116 kB' 'Buffers: 2436 kB' 'Cached: 1754196 kB' 'SwapCached: 0 kB' 'Active: 456632 kB' 'Inactive: 1414160 kB' 'Active(anon): 124632 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414160 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115984 kB' 'Mapped: 48000 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 133944 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 71856 kB' 'KernelStack: 6208 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:04:55.659 15:30:17 -- setup/common.sh@31 -- # read -r var val _
[xtrace scans the snapshot key by key until HugePages_Surp matches]
00:04:55.661 15:30:17 -- setup/common.sh@33 -- # echo 0
00:04:55.661 15:30:17 -- setup/common.sh@33 -- # return 0
00:04:55.661 15:30:17 -- setup/hugepages.sh@99 -- # surp=0
00:04:55.661 15:30:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:55.661 15:30:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:55.661 15:30:17 -- setup/common.sh@18 -- # local node=
00:04:55.661 15:30:17 -- setup/common.sh@19 -- # local var val
00:04:55.661 15:30:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:55.661 15:30:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.661 15:30:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:55.661 15:30:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:55.661 15:30:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.661 15:30:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': '
00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _
00:04:55.661 15:30:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7974076 kB' 'MemAvailable: 9514116 kB' 'Buffers: 2436 kB' 'Cached: 1754196 kB' 'SwapCached: 0 kB' 'Active: 456504 kB' 'Inactive: 1414160 kB' 'Active(anon): 124504 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414160 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115868 kB' 'Mapped: 47892 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 133944 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 71856 kB' 'KernelStack: 6176 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
15:30:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 
15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.661 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.661 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 
00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.662 15:30:17 -- setup/common.sh@33 -- # echo 0 
00:04:55.662 15:30:17 -- setup/common.sh@33 -- # return 0 00:04:55.662 15:30:17 -- setup/hugepages.sh@100 -- # resv=0 00:04:55.662 nr_hugepages=1024 00:04:55.662 15:30:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:55.662 resv_hugepages=0 00:04:55.662 15:30:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.662 15:30:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.662 surplus_hugepages=0 00:04:55.662 anon_hugepages=0 00:04:55.662 15:30:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.662 15:30:17 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.662 15:30:17 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:55.662 15:30:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.662 15:30:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.662 15:30:17 -- setup/common.sh@18 -- # local node= 00:04:55.662 15:30:17 -- setup/common.sh@19 -- # local var val 00:04:55.662 15:30:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.662 15:30:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.662 15:30:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.662 15:30:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.662 15:30:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.662 15:30:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7974076 kB' 'MemAvailable: 9514116 kB' 'Buffers: 2436 kB' 'Cached: 1754196 kB' 'SwapCached: 0 kB' 'Active: 456528 kB' 'Inactive: 1414160 kB' 'Active(anon): 124528 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414160 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115924 kB' 'Mapped: 47892 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 133944 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 71856 kB' 'KernelStack: 6192 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.662 
15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.662 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.662 15:30:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 
15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.663 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.663 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.664 15:30:17 -- setup/common.sh@33 -- # echo 1024 00:04:55.664 15:30:17 -- setup/common.sh@33 -- # return 0 00:04:55.664 15:30:17 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.664 15:30:17 -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.664 15:30:17 -- setup/hugepages.sh@27 -- # local node 00:04:55.664 15:30:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.664 15:30:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:55.664 15:30:17 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:55.664 15:30:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.664 15:30:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.664 15:30:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.664 15:30:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.664 15:30:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.664 15:30:17 -- setup/common.sh@18 -- # local node=0 00:04:55.664 15:30:17 -- setup/common.sh@19 -- # local var val 00:04:55.664 15:30:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.664 15:30:17 -- 
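With this run's numbers, the @107/@110 checks just above assert the hugepage accounting identity: HugePages_Total (1024) must equal nr_hugepages + surplus + reserved, i.e. 1024 == 1024 + 0 + 0. The get_nodes helper traced at hugepages.sh@27-@33 then seeds one counter per NUMA node; a sketch reconstructed from those markers (approximate, and note the extglob that the +([0-9]) glob requires):

    # Sketch of get_nodes per the hugepages.sh@27-@33 trace; approximate.
    shopt -s extglob nullglob    # extglob for +([0-9]); nullglob so no nodes -> empty loop
    declare -a nodes_sys=()
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=1024   # indexed by node number; 1024 as in this run
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))                   # the trace shows no_nodes=1 (node0 only)
    }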
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.664 15:30:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.664 15:30:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.664 15:30:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.664 15:30:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7974328 kB' 'MemUsed: 4267644 kB' 'SwapCached: 0 kB' 'Active: 456536 kB' 'Inactive: 1414160 kB' 'Active(anon): 124536 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1414160 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1756632 kB' 'Mapped: 47892 kB' 'AnonPages: 115660 kB' 'Shmem: 10472 kB' 'KernelStack: 6192 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62088 kB' 'Slab: 133944 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 71856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 
00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- 
setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.664 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.664 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 
00:04:55.665 15:30:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # continue 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 15:30:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 15:30:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.665 15:30:17 -- setup/common.sh@33 -- # echo 0 00:04:55.665 15:30:17 -- setup/common.sh@33 -- # return 0 00:04:55.665 15:30:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.665 15:30:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.665 15:30:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.665 15:30:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.665 node0=1024 expecting 1024 00:04:55.665 15:30:17 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:55.665 15:30:17 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:55.665 00:04:55.665 real 0m1.428s 00:04:55.665 user 0m0.682s 00:04:55.665 sys 0m0.838s 00:04:55.665 15:30:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.665 15:30:17 -- common/autotest_common.sh@10 -- # set +x 00:04:55.665 ************************************ 00:04:55.665 END TEST no_shrink_alloc 00:04:55.665 ************************************ 00:04:55.665 15:30:17 -- setup/hugepages.sh@217 -- # clear_hp 00:04:55.665 15:30:17 -- setup/hugepages.sh@37 -- # local node hp 00:04:55.665 15:30:17 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:55.665 15:30:17 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:55.665 15:30:17 -- setup/hugepages.sh@41 -- # echo 0 00:04:55.665 15:30:17 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:55.665 15:30:17 -- setup/hugepages.sh@41 -- # echo 0 00:04:55.665 15:30:17 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:55.665 15:30:17 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:55.665 00:04:55.665 real 0m6.190s 00:04:55.665 user 0m2.875s 00:04:55.665 sys 0m3.508s 00:04:55.665 15:30:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.665 15:30:17 -- common/autotest_common.sh@10 -- # set +x 00:04:55.665 ************************************ 00:04:55.665 END TEST hugepages 00:04:55.665 ************************************ 00:04:55.924 15:30:17 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:55.924 15:30:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.924 15:30:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.924 15:30:17 -- common/autotest_common.sh@10 -- # set +x 00:04:55.924 ************************************ 00:04:55.924 START TEST driver 00:04:55.924 ************************************ 00:04:55.924 15:30:17 -- 
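Before the driver suite banner above, clear_hp (hugepages.sh@37-@45) tears the pools down: it walks every hugepage size directory under every node and echoes 0, which is why two `echo 0` records appear per node (presumably one each for the 2048kB and 1048576kB pools). A sketch under that assumption; the nr_hugepages target file is inferred from the usual sysfs layout, since the trace shows only the bare echo:

    # Sketch of clear_hp per the hugepages.sh@37-@45 trace; target file assumed.
    clear_hp() {
        local node hp
        for node in "${!nodes_sys[@]}"; do
            for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*; do
                echo 0 > "$hp/nr_hugepages"   # assumed destination of the traced 'echo 0'
            done
        done
        export CLEAR_HUGE=yes   # matches the @45 export in the trace
    }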
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:55.924 * Looking for test storage... 00:04:55.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:55.924 15:30:17 -- setup/driver.sh@68 -- # setup reset 00:04:55.924 15:30:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:55.924 15:30:17 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:02.484 15:30:23 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:02.484 15:30:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:02.484 15:30:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:02.484 15:30:23 -- common/autotest_common.sh@10 -- # set +x 00:05:02.484 ************************************ 00:05:02.484 START TEST guess_driver 00:05:02.484 ************************************ 00:05:02.484 15:30:23 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:02.484 15:30:23 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:02.484 15:30:23 -- setup/driver.sh@47 -- # local fail=0 00:05:02.484 15:30:23 -- setup/driver.sh@49 -- # pick_driver 00:05:02.484 15:30:23 -- setup/driver.sh@36 -- # vfio 00:05:02.484 15:30:23 -- setup/driver.sh@21 -- # local iommu_grups 00:05:02.484 15:30:23 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:02.484 15:30:23 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:02.484 15:30:23 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:02.484 15:30:23 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:02.484 15:30:23 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:02.484 15:30:23 -- setup/driver.sh@32 -- # return 1 00:05:02.484 15:30:23 -- setup/driver.sh@38 -- # uio 00:05:02.484 15:30:23 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:02.484 15:30:23 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:02.484 15:30:23 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:02.484 15:30:23 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:02.484 15:30:23 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:02.484 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:02.484 15:30:23 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:02.484 15:30:23 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:02.484 15:30:23 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:02.484 Looking for driver=uio_pci_generic 00:05:02.484 15:30:23 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:02.484 15:30:23 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.484 15:30:23 -- setup/driver.sh@45 -- # setup output config 00:05:02.484 15:30:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.484 15:30:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:02.742 15:30:24 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:02.742 15:30:24 -- setup/driver.sh@58 -- # continue 00:05:02.742 15:30:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.000 15:30:24 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.000 15:30:24 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:03.000 15:30:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.000 15:30:24 -- setup/driver.sh@58 -- # [[ -> 
== \-\> ]] 00:05:03.000 15:30:24 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:03.000 15:30:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.000 15:30:24 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.000 15:30:24 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:03.000 15:30:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.000 15:30:24 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.000 15:30:24 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:03.000 15:30:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.000 15:30:24 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:03.000 15:30:24 -- setup/driver.sh@65 -- # setup reset 00:05:03.000 15:30:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:03.000 15:30:24 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.564 00:05:09.564 real 0m7.176s 00:05:09.564 user 0m0.827s 00:05:09.564 sys 0m1.482s 00:05:09.564 15:30:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.564 15:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:09.564 ************************************ 00:05:09.564 END TEST guess_driver 00:05:09.564 ************************************ 00:05:09.564 00:05:09.564 real 0m13.207s 00:05:09.564 user 0m1.190s 00:05:09.564 sys 0m2.316s 00:05:09.564 15:30:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.564 15:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:09.564 ************************************ 00:05:09.564 END TEST driver 00:05:09.564 ************************************ 00:05:09.564 15:30:30 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:09.564 15:30:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.564 15:30:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.564 15:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:09.564 ************************************ 00:05:09.564 START TEST devices 00:05:09.564 ************************************ 00:05:09.564 15:30:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:09.564 * Looking for test storage... 
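The guess_driver trace above encodes a two-step pick: prefer VFIO when the kernel exposes IOMMU groups (or unsafe no-IOMMU mode is enabled), otherwise fall back to uio_pci_generic, accepted only if modprobe can resolve it to a real module file (the *\.\k\o* test). On this VM the groups glob is empty and unsafe mode is unset, hence the fallback. A condensed sketch of that decision, reconstructed from the driver.sh@11-@39 markers; the vfio-pci name on the first branch is an assumption, since this run never took it:

    # Condensed sketch of pick_driver per the driver.sh@11-@39 trace; approximate.
    shopt -s nullglob   # the traced (( 0 > 0 )) implies an empty glob yields 0 elements
    pick_driver() {
        local iommu_groups unsafe_vfio=''
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        iommu_groups=(/sys/kernel/iommu_groups/*)
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci   # assumed name; this branch was not exercised in this run
            return 0
        fi
        # fallback: keep uio_pci_generic only if modprobe resolves it to a .ko
        if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }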
00:05:09.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:09.564 15:30:30 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:09.564 15:30:30 -- setup/devices.sh@192 -- # setup reset 00:05:09.564 15:30:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.564 15:30:30 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.500 15:30:31 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:10.500 15:30:31 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:10.500 15:30:31 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:10.500 15:30:31 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:10.500 15:30:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:10.500 15:30:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:05:10.500 15:30:31 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:05:10.500 15:30:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:05:10.500 15:30:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:10.500 15:30:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:10.500 15:30:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:10.500 15:30:31 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:10.500 15:30:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:10.500 15:30:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:10.500 15:30:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:10.500 15:30:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:10.500 15:30:31 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:10.500 15:30:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:10.500 15:30:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:10.500 15:30:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:10.500 15:30:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:10.500 15:30:31 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:10.500 15:30:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:10.500 15:30:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:10.500 15:30:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:10.500 15:30:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:10.500 15:30:31 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:10.500 15:30:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:10.500 15:30:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:10.500 15:30:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:10.500 15:30:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:05:10.500 15:30:31 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:05:10.500 15:30:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:10.500 15:30:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:10.500 15:30:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:10.500 15:30:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:05:10.500 15:30:31 -- common/autotest_common.sh@1647 -- # local 
device=nvme3n1 00:05:10.500 15:30:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:10.500 15:30:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:10.500 15:30:31 -- setup/devices.sh@196 -- # blocks=() 00:05:10.500 15:30:31 -- setup/devices.sh@196 -- # declare -a blocks 00:05:10.500 15:30:31 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:10.500 15:30:31 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:10.500 15:30:31 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:10.500 15:30:31 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:10.500 15:30:31 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:10.500 15:30:31 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:10.500 15:30:31 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:05:10.500 15:30:31 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:05:10.500 15:30:31 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:10.500 15:30:31 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:10.500 15:30:31 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:10.500 No valid GPT data, bailing 00:05:10.500 15:30:31 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:10.500 15:30:31 -- scripts/common.sh@393 -- # pt= 00:05:10.500 15:30:31 -- scripts/common.sh@394 -- # return 1 00:05:10.500 15:30:31 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:10.500 15:30:31 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:10.500 15:30:31 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:10.501 15:30:31 -- setup/common.sh@80 -- # echo 1073741824 00:05:10.501 15:30:31 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:05:10.501 15:30:31 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:10.501 15:30:31 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:10.501 15:30:31 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:10.501 15:30:31 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:05:10.501 15:30:31 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:05:10.501 15:30:31 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:10.501 15:30:31 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:10.501 15:30:31 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:10.501 No valid GPT data, bailing 00:05:10.501 15:30:31 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:10.501 15:30:31 -- scripts/common.sh@393 -- # pt= 00:05:10.501 15:30:31 -- scripts/common.sh@394 -- # return 1 00:05:10.501 15:30:31 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:10.501 15:30:31 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:10.501 15:30:31 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:10.501 15:30:31 -- setup/common.sh@80 -- # echo 4294967296 00:05:10.501 15:30:31 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:10.501 15:30:31 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:10.501 15:30:31 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:05:10.501 15:30:31 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:10.501 15:30:31 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:10.501 15:30:31 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:10.501 15:30:31 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:05:10.501 15:30:31 -- setup/devices.sh@203 -- # [[ '' == 
*\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:05:10.501 15:30:31 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:10.501 15:30:31 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:10.501 15:30:31 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:10.501 No valid GPT data, bailing 00:05:10.501 15:30:31 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:10.501 15:30:31 -- scripts/common.sh@393 -- # pt= 00:05:10.501 15:30:31 -- scripts/common.sh@394 -- # return 1 00:05:10.501 15:30:31 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:10.501 15:30:31 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:10.501 15:30:31 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:10.501 15:30:31 -- setup/common.sh@80 -- # echo 4294967296 00:05:10.501 15:30:31 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:10.501 15:30:31 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:10.501 15:30:31 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:05:10.501 15:30:31 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:10.501 15:30:31 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:10.501 15:30:31 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:10.501 15:30:31 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:05:10.501 15:30:31 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:05:10.501 15:30:31 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:10.501 15:30:31 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:10.501 15:30:31 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:10.501 No valid GPT data, bailing 00:05:10.501 15:30:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:10.501 15:30:32 -- scripts/common.sh@393 -- # pt= 00:05:10.501 15:30:32 -- scripts/common.sh@394 -- # return 1 00:05:10.501 15:30:32 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:10.501 15:30:32 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:10.501 15:30:32 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:10.501 15:30:32 -- setup/common.sh@80 -- # echo 4294967296 00:05:10.501 15:30:32 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:10.501 15:30:32 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:10.501 15:30:32 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:05:10.501 15:30:32 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:10.501 15:30:32 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:10.501 15:30:32 -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:10.501 15:30:32 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:10.501 15:30:32 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:10.501 15:30:32 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:10.501 15:30:32 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:05:10.501 15:30:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:05:10.501 No valid GPT data, bailing 00:05:10.760 15:30:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:10.760 15:30:32 -- scripts/common.sh@393 -- # pt= 00:05:10.760 15:30:32 -- scripts/common.sh@394 -- # return 1 00:05:10.760 15:30:32 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:10.760 15:30:32 -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:10.760 15:30:32 -- setup/common.sh@78 
-- # [[ -e /sys/block/nvme2n1 ]] 00:05:10.760 15:30:32 -- setup/common.sh@80 -- # echo 6343335936 00:05:10.760 15:30:32 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:05:10.760 15:30:32 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:10.760 15:30:32 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:10.760 15:30:32 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:10.760 15:30:32 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:10.760 15:30:32 -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:10.760 15:30:32 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:10.760 15:30:32 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:10.760 15:30:32 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:10.760 15:30:32 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:05:10.760 15:30:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:05:10.760 No valid GPT data, bailing 00:05:10.760 15:30:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:10.760 15:30:32 -- scripts/common.sh@393 -- # pt= 00:05:10.760 15:30:32 -- scripts/common.sh@394 -- # return 1 00:05:10.760 15:30:32 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:10.760 15:30:32 -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:10.761 15:30:32 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:10.761 15:30:32 -- setup/common.sh@80 -- # echo 5368709120 00:05:10.761 15:30:32 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:10.761 15:30:32 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:10.761 15:30:32 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:10.761 15:30:32 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:05:10.761 15:30:32 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:05:10.761 15:30:32 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:10.761 15:30:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:10.761 15:30:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:10.761 15:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:10.761 ************************************ 00:05:10.761 START TEST nvme_mount 00:05:10.761 ************************************ 00:05:10.761 15:30:32 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:10.761 15:30:32 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:05:10.761 15:30:32 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:05:10.761 15:30:32 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:10.761 15:30:32 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:10.761 15:30:32 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:05:10.761 15:30:32 -- setup/common.sh@39 -- # local disk=nvme1n1 00:05:10.761 15:30:32 -- setup/common.sh@40 -- # local part_no=1 00:05:10.761 15:30:32 -- setup/common.sh@41 -- # local size=1073741824 00:05:10.761 15:30:32 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:10.761 15:30:32 -- setup/common.sh@44 -- # parts=() 00:05:10.761 15:30:32 -- setup/common.sh@44 -- # local parts 00:05:10.761 15:30:32 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:10.761 15:30:32 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:10.761 15:30:32 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:10.761 15:30:32 -- setup/common.sh@46 -- # (( 
part++ )) 00:05:10.761 15:30:32 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:10.761 15:30:32 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:10.761 15:30:32 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:05:10.761 15:30:32 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:05:11.698 Creating new GPT entries in memory. 00:05:11.698 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:11.698 other utilities. 00:05:11.698 15:30:33 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:11.698 15:30:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.698 15:30:33 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:11.698 15:30:33 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:11.698 15:30:33 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:05:13.089 Creating new GPT entries in memory. 00:05:13.089 The operation has completed successfully. 00:05:13.089 15:30:34 -- setup/common.sh@57 -- # (( part++ )) 00:05:13.089 15:30:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:13.089 15:30:34 -- setup/common.sh@62 -- # wait 53971 00:05:13.089 15:30:34 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.089 15:30:34 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:13.089 15:30:34 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.089 15:30:34 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:05:13.089 15:30:34 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:05:13.089 15:30:34 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.089 15:30:34 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:13.089 15:30:34 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:13.089 15:30:34 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:05:13.089 15:30:34 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.089 15:30:34 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:13.089 15:30:34 -- setup/devices.sh@53 -- # local found=0 00:05:13.089 15:30:34 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:13.089 15:30:34 -- setup/devices.sh@56 -- # : 00:05:13.089 15:30:34 -- setup/devices.sh@59 -- # local pci status 00:05:13.089 15:30:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.089 15:30:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:13.089 15:30:34 -- setup/devices.sh@47 -- # setup output config 00:05:13.089 15:30:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.089 15:30:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:13.089 15:30:34 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:13.089 15:30:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.089 15:30:34 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:13.089 15:30:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.346 
15:30:34 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:13.346 15:30:34 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:05:13.346 15:30:34 -- setup/devices.sh@63 -- # found=1 00:05:13.346 15:30:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.346 15:30:34 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:13.346 15:30:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.605 15:30:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:13.605 15:30:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.605 15:30:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:13.605 15:30:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.605 15:30:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:13.605 15:30:35 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:13.605 15:30:35 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.605 15:30:35 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:13.605 15:30:35 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:13.605 15:30:35 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:13.605 15:30:35 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.605 15:30:35 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.605 15:30:35 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:13.605 15:30:35 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:05:13.605 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:13.605 15:30:35 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:13.605 15:30:35 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:13.864 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:13.864 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:13.864 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:13.864 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:05:13.864 15:30:35 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:13.864 15:30:35 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:13.864 15:30:35 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.864 15:30:35 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:05:13.864 15:30:35 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:05:13.864 15:30:35 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.864 15:30:35 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:13.864 15:30:35 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:13.864 15:30:35 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:05:13.864 15:30:35 -- setup/devices.sh@50 -- # local 
mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.864 15:30:35 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:13.864 15:30:35 -- setup/devices.sh@53 -- # local found=0 00:05:13.864 15:30:35 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:13.864 15:30:35 -- setup/devices.sh@56 -- # : 00:05:13.864 15:30:35 -- setup/devices.sh@59 -- # local pci status 00:05:13.864 15:30:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.864 15:30:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:13.864 15:30:35 -- setup/devices.sh@47 -- # setup output config 00:05:13.864 15:30:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.864 15:30:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:14.123 15:30:35 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:14.123 15:30:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.123 15:30:35 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:14.123 15:30:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.381 15:30:35 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:14.381 15:30:35 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:05:14.381 15:30:35 -- setup/devices.sh@63 -- # found=1 00:05:14.381 15:30:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.381 15:30:35 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:14.381 15:30:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.650 15:30:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:14.650 15:30:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.650 15:30:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:14.650 15:30:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.650 15:30:36 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.650 15:30:36 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:14.650 15:30:36 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.650 15:30:36 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.650 15:30:36 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.650 15:30:36 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.650 15:30:36 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:05:14.650 15:30:36 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:14.650 15:30:36 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:05:14.650 15:30:36 -- setup/devices.sh@50 -- # local mount_point= 00:05:14.650 15:30:36 -- setup/devices.sh@51 -- # local test_file= 00:05:14.650 15:30:36 -- setup/devices.sh@53 -- # local found=0 00:05:14.650 15:30:36 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:14.650 15:30:36 -- setup/devices.sh@59 -- # local pci status 00:05:14.650 15:30:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.650 15:30:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:14.650 15:30:36 -- setup/devices.sh@47 -- # 
setup output config 00:05:14.650 15:30:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.650 15:30:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:14.922 15:30:36 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:14.922 15:30:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.922 15:30:36 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:14.922 15:30:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.181 15:30:36 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:15.181 15:30:36 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:05:15.181 15:30:36 -- setup/devices.sh@63 -- # found=1 00:05:15.181 15:30:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.440 15:30:36 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:15.440 15:30:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.440 15:30:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:15.440 15:30:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.440 15:30:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:15.440 15:30:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.699 15:30:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.699 15:30:37 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:15.699 15:30:37 -- setup/devices.sh@68 -- # return 0 00:05:15.699 15:30:37 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:15.699 15:30:37 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.699 15:30:37 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:15.699 15:30:37 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:15.699 15:30:37 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:15.699 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:15.699 00:05:15.699 real 0m4.895s 00:05:15.699 user 0m1.189s 00:05:15.699 sys 0m1.425s 00:05:15.699 15:30:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.699 15:30:37 -- common/autotest_common.sh@10 -- # set +x 00:05:15.699 ************************************ 00:05:15.699 END TEST nvme_mount 00:05:15.699 ************************************ 00:05:15.699 15:30:37 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:15.699 15:30:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:15.699 15:30:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.699 15:30:37 -- common/autotest_common.sh@10 -- # set +x 00:05:15.699 ************************************ 00:05:15.699 START TEST dm_mount 00:05:15.699 ************************************ 00:05:15.699 15:30:37 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:15.699 15:30:37 -- setup/devices.sh@144 -- # pv=nvme1n1 00:05:15.699 15:30:37 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:05:15.699 15:30:37 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:05:15.699 15:30:37 -- setup/devices.sh@148 -- # partition_drive nvme1n1 00:05:15.699 15:30:37 -- setup/common.sh@39 -- # local disk=nvme1n1 00:05:15.699 15:30:37 -- setup/common.sh@40 -- # local part_no=2 00:05:15.699 15:30:37 -- setup/common.sh@41 -- # local size=1073741824 00:05:15.699 15:30:37 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:15.699 
15:30:37 -- setup/common.sh@44 -- # parts=() 00:05:15.699 15:30:37 -- setup/common.sh@44 -- # local parts 00:05:15.699 15:30:37 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:15.699 15:30:37 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.699 15:30:37 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:15.699 15:30:37 -- setup/common.sh@46 -- # (( part++ )) 00:05:15.699 15:30:37 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.699 15:30:37 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:15.699 15:30:37 -- setup/common.sh@46 -- # (( part++ )) 00:05:15.699 15:30:37 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.699 15:30:37 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:15.699 15:30:37 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:05:15.699 15:30:37 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:05:16.636 Creating new GPT entries in memory. 00:05:16.636 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:16.636 other utilities. 00:05:16.636 15:30:38 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:16.636 15:30:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.636 15:30:38 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:16.636 15:30:38 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:16.636 15:30:38 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:05:18.012 Creating new GPT entries in memory. 00:05:18.012 The operation has completed successfully. 00:05:18.012 15:30:39 -- setup/common.sh@57 -- # (( part++ )) 00:05:18.012 15:30:39 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.012 15:30:39 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:18.012 15:30:39 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.012 15:30:39 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:05:18.946 The operation has completed successfully. 
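The zap-and-partition sequence traced above (and in the earlier nvme_mount run) reduces to a short, reproducible recipe. A minimal sketch, assuming sgdisk and flock are installed, with the device name and sector ranges copied from the trace; udevadm settle is an assumed stand-in for the repo's sync_dev_uevents.sh helper:

  disk=/dev/nvme1n1
  sgdisk "$disk" --zap-all                            # destroy any existing GPT/MBR structures
  flock "$disk" sgdisk "$disk" --new=1:2048:264191    # partition 1: 262144 sectors = 128 MiB
  flock "$disk" sgdisk "$disk" --new=2:264192:526335  # partition 2: same size, starts where p1 ends
  udevadm settle                                      # wait for the kernel/udev to publish the new partitions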
00:05:18.946 15:30:40 -- setup/common.sh@57 -- # (( part++ )) 00:05:18.946 15:30:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.946 15:30:40 -- setup/common.sh@62 -- # wait 54599 00:05:18.946 15:30:40 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:18.946 15:30:40 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:18.946 15:30:40 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:18.946 15:30:40 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:18.946 15:30:40 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:18.946 15:30:40 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:18.946 15:30:40 -- setup/devices.sh@161 -- # break 00:05:18.946 15:30:40 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:18.946 15:30:40 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:18.946 15:30:40 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:18.946 15:30:40 -- setup/devices.sh@166 -- # dm=dm-0 00:05:18.946 15:30:40 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:05:18.946 15:30:40 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:05:18.946 15:30:40 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:18.946 15:30:40 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:18.946 15:30:40 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:18.946 15:30:40 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:18.946 15:30:40 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:18.946 15:30:40 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:18.946 15:30:40 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:18.946 15:30:40 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:18.946 15:30:40 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:05:18.946 15:30:40 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:18.946 15:30:40 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:18.946 15:30:40 -- setup/devices.sh@53 -- # local found=0 00:05:18.946 15:30:40 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:18.946 15:30:40 -- setup/devices.sh@56 -- # : 00:05:18.946 15:30:40 -- setup/devices.sh@59 -- # local pci status 00:05:18.946 15:30:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.946 15:30:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:18.946 15:30:40 -- setup/devices.sh@47 -- # setup output config 00:05:18.946 15:30:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.946 15:30:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:18.946 15:30:40 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:18.946 15:30:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.204 15:30:40 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:19.204 15:30:40 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.463 15:30:40 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:19.463 15:30:40 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:19.463 15:30:40 -- setup/devices.sh@63 -- # found=1 00:05:19.463 15:30:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.463 15:30:40 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:19.463 15:30:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.463 15:30:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:19.463 15:30:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.721 15:30:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:19.721 15:30:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.721 15:30:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.721 15:30:41 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:19.721 15:30:41 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.721 15:30:41 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:19.721 15:30:41 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:19.721 15:30:41 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.721 15:30:41 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:05:19.721 15:30:41 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:19.721 15:30:41 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:05:19.721 15:30:41 -- setup/devices.sh@50 -- # local mount_point= 00:05:19.721 15:30:41 -- setup/devices.sh@51 -- # local test_file= 00:05:19.721 15:30:41 -- setup/devices.sh@53 -- # local found=0 00:05:19.722 15:30:41 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:19.722 15:30:41 -- setup/devices.sh@59 -- # local pci status 00:05:19.722 15:30:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.722 15:30:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:19.722 15:30:41 -- setup/devices.sh@47 -- # setup output config 00:05:19.722 15:30:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.722 15:30:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:19.722 15:30:41 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:19.722 15:30:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.980 15:30:41 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:19.980 15:30:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.239 15:30:41 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:20.239 15:30:41 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:05:20.239 15:30:41 -- setup/devices.sh@63 -- # found=1 00:05:20.239 15:30:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.239 15:30:41 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:20.239 15:30:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.239 15:30:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:20.239 15:30:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.498 15:30:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:20.498 15:30:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.498 15:30:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:20.498 15:30:41 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:20.498 15:30:41 -- setup/devices.sh@68 -- # return 0 00:05:20.498 15:30:41 -- setup/devices.sh@187 -- # cleanup_dm 00:05:20.498 15:30:41 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.498 15:30:41 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:20.498 15:30:41 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:20.498 15:30:42 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:20.498 15:30:42 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:05:20.498 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:20.498 15:30:42 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:20.498 15:30:42 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:05:20.498 00:05:20.498 real 0m4.872s 00:05:20.498 user 0m0.764s 00:05:20.498 sys 0m1.043s 00:05:20.498 15:30:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.498 15:30:42 -- common/autotest_common.sh@10 -- # set +x 00:05:20.498 ************************************ 00:05:20.498 END TEST dm_mount 00:05:20.498 ************************************ 00:05:20.498 15:30:42 -- setup/devices.sh@1 -- # cleanup 00:05:20.498 15:30:42 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:20.498 15:30:42 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.498 15:30:42 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:20.498 15:30:42 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:05:20.498 15:30:42 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:20.498 15:30:42 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:20.755 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:20.755 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:20.755 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:20.755 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:05:20.755 15:30:42 -- setup/devices.sh@12 -- # cleanup_dm 00:05:20.755 15:30:42 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.013 15:30:42 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:21.013 15:30:42 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:21.013 15:30:42 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:21.013 15:30:42 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:05:21.013 15:30:42 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:05:21.013 00:05:21.013 real 0m11.811s 00:05:21.013 user 0m2.857s 00:05:21.013 sys 0m3.300s 00:05:21.013 15:30:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.013 ************************************ 00:05:21.013 END TEST devices 00:05:21.013 ************************************ 00:05:21.013 15:30:42 -- common/autotest_common.sh@10 -- # 
set +x 00:05:21.013 00:05:21.013 real 0m42.863s 00:05:21.013 user 0m9.768s 00:05:21.013 sys 0m12.990s 00:05:21.013 15:30:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.013 15:30:42 -- common/autotest_common.sh@10 -- # set +x 00:05:21.013 ************************************ 00:05:21.013 END TEST setup.sh 00:05:21.013 ************************************ 00:05:21.013 15:30:42 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:21.013 Hugepages 00:05:21.013 node hugesize free / total 00:05:21.013 node0 1048576kB 0 / 0 00:05:21.013 node0 2048kB 2048 / 2048 00:05:21.013 00:05:21.013 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:21.271 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:21.271 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:21.271 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:21.529 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:21.529 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:21.529 15:30:42 -- spdk/autotest.sh@141 -- # uname -s 00:05:21.529 15:30:42 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:21.529 15:30:42 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:21.529 15:30:42 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:22.462 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:22.462 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:22.462 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:22.720 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:22.720 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:22.720 15:30:44 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:23.655 15:30:45 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:23.655 15:30:45 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:23.655 15:30:45 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:23.655 15:30:45 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:23.655 15:30:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:23.655 15:30:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:23.655 15:30:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:23.655 15:30:45 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:23.655 15:30:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:23.913 15:30:45 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:23.913 15:30:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:23.913 15:30:45 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:24.170 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.428 Waiting for block devices as requested 00:05:24.428 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:05:24.428 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:05:24.686 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:24.686 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:29.953 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:05:29.953 15:30:51 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:29.953 15:30:51 -- 
common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:29.953 15:30:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:29.953 15:30:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:29.953 15:30:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:29.953 15:30:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:05:29.953 15:30:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:29.953 15:30:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:29.953 15:30:51 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme2 00:05:29.953 15:30:51 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme2 ]] 00:05:29.953 15:30:51 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme2 00:05:29.953 15:30:51 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:29.953 15:30:51 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:29.953 15:30:51 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:29.953 15:30:51 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:29.953 15:30:51 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:29.953 15:30:51 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme2 00:05:29.953 15:30:51 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:29.953 15:30:51 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:29.953 15:30:51 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:29.953 15:30:51 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:29.953 15:30:51 -- common/autotest_common.sh@1542 -- # continue 00:05:29.953 15:30:51 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:29.953 15:30:51 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:29.953 15:30:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:29.953 15:30:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:05:29.953 15:30:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:29.953 15:30:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:05:29.953 15:30:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:29.953 15:30:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:29.953 15:30:51 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme3 00:05:29.953 15:30:51 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme3 ]] 00:05:29.953 15:30:51 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:29.953 15:30:51 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme3 00:05:29.953 15:30:51 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:29.953 15:30:51 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:29.953 15:30:51 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:29.953 15:30:51 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:29.953 15:30:51 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme3 00:05:29.953 15:30:51 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:29.953 15:30:51 -- common/autotest_common.sh@1539 -- # cut 
-d: -f2 00:05:29.953 15:30:51 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:29.953 15:30:51 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:29.953 15:30:51 -- common/autotest_common.sh@1542 -- # continue 00:05:29.953 15:30:51 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:29.953 15:30:51 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:05:29.953 15:30:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:29.953 15:30:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:08.0/nvme/nvme 00:05:29.953 15:30:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:29.953 15:30:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:05:29.953 15:30:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:29.953 15:30:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:29.953 15:30:51 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:05:29.953 15:30:51 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:05:29.953 15:30:51 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:05:29.953 15:30:51 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:29.953 15:30:51 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:29.953 15:30:51 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:29.953 15:30:51 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:29.953 15:30:51 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:29.953 15:30:51 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:05:29.953 15:30:51 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:29.953 15:30:51 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:29.953 15:30:51 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:29.953 15:30:51 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:29.953 15:30:51 -- common/autotest_common.sh@1542 -- # continue 00:05:29.953 15:30:51 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:29.953 15:30:51 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:05:29.953 15:30:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:29.953 15:30:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:09.0/nvme/nvme 00:05:29.953 15:30:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:29.953 15:30:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:05:29.953 15:30:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:29.953 15:30:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:29.953 15:30:51 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:29.953 15:30:51 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:29.953 15:30:51 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:29.953 15:30:51 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:29.953 15:30:51 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:29.953 15:30:51 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:29.953 15:30:51 -- 
common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:29.953 15:30:51 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:29.953 15:30:51 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:29.953 15:30:51 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:29.953 15:30:51 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:29.953 15:30:51 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:29.953 15:30:51 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:29.953 15:30:51 -- common/autotest_common.sh@1542 -- # continue 00:05:29.953 15:30:51 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:29.953 15:30:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:29.953 15:30:51 -- common/autotest_common.sh@10 -- # set +x 00:05:29.953 15:30:51 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:29.953 15:30:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:29.953 15:30:51 -- common/autotest_common.sh@10 -- # set +x 00:05:29.953 15:30:51 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:30.886 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.887 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.887 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.887 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:31.145 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:31.145 15:30:52 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:31.145 15:30:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:31.145 15:30:52 -- common/autotest_common.sh@10 -- # set +x 00:05:31.145 15:30:52 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:31.145 15:30:52 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:31.145 15:30:52 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:31.145 15:30:52 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:31.145 15:30:52 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:31.145 15:30:52 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:31.145 15:30:52 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:31.145 15:30:52 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:31.145 15:30:52 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:31.145 15:30:52 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:31.145 15:30:52 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:31.145 15:30:52 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:31.145 15:30:52 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:31.145 15:30:52 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:31.145 15:30:52 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:31.145 15:30:52 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:31.145 15:30:52 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:31.145 15:30:52 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:31.145 15:30:52 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:31.145 15:30:52 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:31.145 15:30:52 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
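A note on the oacs arithmetic in the pre-cleanup trace above: nvme id-ctrl reports oacs=0x12a on each controller, and bit 3 (mask 0x8) of that field is the Namespace Management capability, which is why oacs_ns_manage comes out as 8 every time. A hedged one-liner reproducing the check, with the nvme CLI invocation and device path taken from the trace:

  # 0x12a = 1 0010 1010 in binary; bit 3 (0x8) = Namespace Management supported
  oacs=$(nvme id-ctrl /dev/nvme2 | grep oacs | cut -d: -f2)
  echo $(( oacs & 0x8 ))   # prints 8 when namespace management is available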
00:05:31.145 15:30:52 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:31.145 15:30:52 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:05:31.145 15:30:52 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:31.145 15:30:52 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:31.145 15:30:52 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:31.145 15:30:52 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:05:31.145 15:30:52 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:31.145 15:30:52 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:31.145 15:30:52 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:31.145 15:30:52 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:31.145 15:30:52 -- common/autotest_common.sh@1578 -- # return 0 00:05:31.145 15:30:52 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:31.145 15:30:52 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:31.145 15:30:52 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:31.145 15:30:52 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:31.145 15:30:52 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:31.145 15:30:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:31.145 15:30:52 -- common/autotest_common.sh@10 -- # set +x 00:05:31.145 15:30:52 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:31.145 15:30:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.145 15:30:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.145 15:30:52 -- common/autotest_common.sh@10 -- # set +x 00:05:31.145 ************************************ 00:05:31.145 START TEST env 00:05:31.145 ************************************ 00:05:31.145 15:30:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:31.403 * Looking for test storage... 
00:05:31.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:31.403 15:30:52 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:31.403 15:30:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.403 15:30:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.403 15:30:52 -- common/autotest_common.sh@10 -- # set +x 00:05:31.403 ************************************ 00:05:31.403 START TEST env_memory 00:05:31.403 ************************************ 00:05:31.403 15:30:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:31.403 00:05:31.403 00:05:31.403 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.403 http://cunit.sourceforge.net/ 00:05:31.403 00:05:31.403 00:05:31.403 Suite: memory 00:05:31.403 Test: alloc and free memory map ...[2024-07-24 15:30:52.894762] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:31.403 passed 00:05:31.403 Test: mem map translation ...[2024-07-24 15:30:52.959389] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:31.403 [2024-07-24 15:30:52.959827] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:31.403 [2024-07-24 15:30:52.960185] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:31.403 [2024-07-24 15:30:52.960461] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:31.661 passed 00:05:31.661 Test: mem map registration ...[2024-07-24 15:30:53.059539] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:31.661 [2024-07-24 15:30:53.059805] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:31.661 passed 00:05:31.661 Test: mem map adjacent registrations ...passed 00:05:31.661 00:05:31.661 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.661 suites 1 1 n/a 0 0 00:05:31.661 tests 4 4 4 0 0 00:05:31.661 asserts 152 152 152 0 n/a 00:05:31.661 00:05:31.661 Elapsed time = 0.347 seconds 00:05:31.661 ************************************ 00:05:31.661 END TEST env_memory 00:05:31.661 ************************************ 00:05:31.661 00:05:31.661 real 0m0.392s 00:05:31.661 user 0m0.355s 00:05:31.661 sys 0m0.025s 00:05:31.661 15:30:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.661 15:30:53 -- common/autotest_common.sh@10 -- # set +x 00:05:31.661 15:30:53 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:31.661 15:30:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.661 15:30:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.661 15:30:53 -- common/autotest_common.sh@10 -- # set +x 00:05:31.917 ************************************ 00:05:31.917 START TEST env_vtophys 00:05:31.917 ************************************ 00:05:31.917 15:30:53 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:31.917 EAL: lib.eal log level changed from notice to debug 00:05:31.918 EAL: Detected lcore 0 as core 0 on socket 0 00:05:31.918 EAL: Detected lcore 1 as core 0 on socket 0 00:05:31.918 EAL: Detected lcore 2 as core 0 on socket 0 00:05:31.918 EAL: Detected lcore 3 as core 0 on socket 0 00:05:31.918 EAL: Detected lcore 4 as core 0 on socket 0 00:05:31.918 EAL: Detected lcore 5 as core 0 on socket 0 00:05:31.918 EAL: Detected lcore 6 as core 0 on socket 0 00:05:31.918 EAL: Detected lcore 7 as core 0 on socket 0 00:05:31.918 EAL: Detected lcore 8 as core 0 on socket 0 00:05:31.918 EAL: Detected lcore 9 as core 0 on socket 0 00:05:31.918 EAL: Maximum logical cores by configuration: 128 00:05:31.918 EAL: Detected CPU lcores: 10 00:05:31.918 EAL: Detected NUMA nodes: 1 00:05:31.918 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:31.918 EAL: Detected shared linkage of DPDK 00:05:31.918 EAL: No shared files mode enabled, IPC will be disabled 00:05:31.918 EAL: Selected IOVA mode 'PA' 00:05:31.918 EAL: Probing VFIO support... 00:05:31.918 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:31.918 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:31.918 EAL: Ask a virtual area of 0x2e000 bytes 00:05:31.918 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:31.918 EAL: Setting up physically contiguous memory... 00:05:31.918 EAL: Setting maximum number of open files to 524288 00:05:31.918 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:31.918 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:31.918 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.918 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:31.918 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.918 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.918 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:31.918 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:31.918 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.918 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:31.918 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.918 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.918 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:31.918 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:31.918 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.918 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:31.918 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.918 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.918 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:31.918 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:31.918 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.918 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:31.918 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.918 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.918 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:31.918 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:31.918 EAL: Hugepages will be freed exactly as allocated. 
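The repeated 0x400000000 reservations above follow directly from the announced geometry: each memseg list holds 8192 segments of 2 MiB hugepages, i.e. 16 GiB of virtual address space, and the four lists together reserve 64 GiB. A quick sanity check in plain bash arithmetic:

  printf '%#x\n' $(( 8192 * 2097152 ))        # 0x400000000 -> 16 GiB per memseg list
  echo $(( 4 * 8192 * 2097152 / 1024**3 ))    # 64 -> GiB of VA reserved across all 4 lists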
00:05:31.918 EAL: No shared files mode enabled, IPC is disabled 00:05:31.918 EAL: No shared files mode enabled, IPC is disabled 00:05:31.918 EAL: TSC frequency is ~2200000 KHz 00:05:31.918 EAL: Main lcore 0 is ready (tid=7f9f47b2fa40;cpuset=[0]) 00:05:31.918 EAL: Trying to obtain current memory policy. 00:05:31.918 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.918 EAL: Restoring previous memory policy: 0 00:05:31.918 EAL: request: mp_malloc_sync 00:05:31.918 EAL: No shared files mode enabled, IPC is disabled 00:05:31.918 EAL: Heap on socket 0 was expanded by 2MB 00:05:31.918 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:31.918 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:31.918 EAL: Mem event callback 'spdk:(nil)' registered 00:05:31.918 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:31.918 00:05:31.918 00:05:31.918 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.918 http://cunit.sourceforge.net/ 00:05:31.918 00:05:31.918 00:05:31.918 Suite: components_suite 00:05:32.484 Test: vtophys_malloc_test ...passed 00:05:32.484 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:32.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.484 EAL: Restoring previous memory policy: 4 00:05:32.484 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.484 EAL: request: mp_malloc_sync 00:05:32.484 EAL: No shared files mode enabled, IPC is disabled 00:05:32.484 EAL: Heap on socket 0 was expanded by 4MB 00:05:32.484 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.484 EAL: request: mp_malloc_sync 00:05:32.484 EAL: No shared files mode enabled, IPC is disabled 00:05:32.484 EAL: Heap on socket 0 was shrunk by 4MB 00:05:32.484 EAL: Trying to obtain current memory policy. 00:05:32.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.484 EAL: Restoring previous memory policy: 4 00:05:32.484 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.484 EAL: request: mp_malloc_sync 00:05:32.484 EAL: No shared files mode enabled, IPC is disabled 00:05:32.484 EAL: Heap on socket 0 was expanded by 6MB 00:05:32.484 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.484 EAL: request: mp_malloc_sync 00:05:32.484 EAL: No shared files mode enabled, IPC is disabled 00:05:32.484 EAL: Heap on socket 0 was shrunk by 6MB 00:05:32.484 EAL: Trying to obtain current memory policy. 00:05:32.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.484 EAL: Restoring previous memory policy: 4 00:05:32.484 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.484 EAL: request: mp_malloc_sync 00:05:32.484 EAL: No shared files mode enabled, IPC is disabled 00:05:32.484 EAL: Heap on socket 0 was expanded by 10MB 00:05:32.484 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.484 EAL: request: mp_malloc_sync 00:05:32.484 EAL: No shared files mode enabled, IPC is disabled 00:05:32.484 EAL: Heap on socket 0 was shrunk by 10MB 00:05:32.484 EAL: Trying to obtain current memory policy. 
00:05:32.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.484 EAL: Restoring previous memory policy: 4 00:05:32.484 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.484 EAL: request: mp_malloc_sync 00:05:32.484 EAL: No shared files mode enabled, IPC is disabled 00:05:32.484 EAL: Heap on socket 0 was expanded by 18MB 00:05:32.484 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.484 EAL: request: mp_malloc_sync 00:05:32.484 EAL: No shared files mode enabled, IPC is disabled 00:05:32.484 EAL: Heap on socket 0 was shrunk by 18MB 00:05:32.484 EAL: Trying to obtain current memory policy. 00:05:32.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.484 EAL: Restoring previous memory policy: 4 00:05:32.484 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.484 EAL: request: mp_malloc_sync 00:05:32.484 EAL: No shared files mode enabled, IPC is disabled 00:05:32.484 EAL: Heap on socket 0 was expanded by 34MB 00:05:32.484 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.484 EAL: request: mp_malloc_sync 00:05:32.484 EAL: No shared files mode enabled, IPC is disabled 00:05:32.484 EAL: Heap on socket 0 was shrunk by 34MB 00:05:32.484 EAL: Trying to obtain current memory policy. 00:05:32.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.484 EAL: Restoring previous memory policy: 4 00:05:32.484 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.484 EAL: request: mp_malloc_sync 00:05:32.484 EAL: No shared files mode enabled, IPC is disabled 00:05:32.484 EAL: Heap on socket 0 was expanded by 66MB 00:05:32.742 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.742 EAL: request: mp_malloc_sync 00:05:32.742 EAL: No shared files mode enabled, IPC is disabled 00:05:32.742 EAL: Heap on socket 0 was shrunk by 66MB 00:05:32.742 EAL: Trying to obtain current memory policy. 00:05:32.742 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.742 EAL: Restoring previous memory policy: 4 00:05:32.742 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.742 EAL: request: mp_malloc_sync 00:05:32.742 EAL: No shared files mode enabled, IPC is disabled 00:05:32.742 EAL: Heap on socket 0 was expanded by 130MB 00:05:32.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.999 EAL: request: mp_malloc_sync 00:05:32.999 EAL: No shared files mode enabled, IPC is disabled 00:05:32.999 EAL: Heap on socket 0 was shrunk by 130MB 00:05:33.257 EAL: Trying to obtain current memory policy. 00:05:33.257 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.257 EAL: Restoring previous memory policy: 4 00:05:33.257 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.257 EAL: request: mp_malloc_sync 00:05:33.257 EAL: No shared files mode enabled, IPC is disabled 00:05:33.257 EAL: Heap on socket 0 was expanded by 258MB 00:05:33.514 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.514 EAL: request: mp_malloc_sync 00:05:33.514 EAL: No shared files mode enabled, IPC is disabled 00:05:33.514 EAL: Heap on socket 0 was shrunk by 258MB 00:05:33.772 EAL: Trying to obtain current memory policy. 
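Reading the expand/shrink pairs in this suite, the allocation sizes grow as 4, 6, 10, 18, 34, 66 MB here and continue to 130, 258, 514 and 1026 MB below, i.e. 2^k + 2 MB for k = 1..10 — presumably so each step lands just past a fresh power-of-two boundary. A one-liner reproduces the sequence seen in the log:

```bash
# Print the vtophys_malloc_test allocation sizes observed above: 2^k + 2 MB.
for k in $(seq 1 10); do echo "$(( (1 << k) + 2 ))MB"; done
```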
00:05:33.772 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.029 EAL: Restoring previous memory policy: 4 00:05:34.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.029 EAL: request: mp_malloc_sync 00:05:34.029 EAL: No shared files mode enabled, IPC is disabled 00:05:34.029 EAL: Heap on socket 0 was expanded by 514MB 00:05:34.594 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.852 EAL: request: mp_malloc_sync 00:05:34.852 EAL: No shared files mode enabled, IPC is disabled 00:05:34.852 EAL: Heap on socket 0 was shrunk by 514MB 00:05:35.418 EAL: Trying to obtain current memory policy. 00:05:35.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.675 EAL: Restoring previous memory policy: 4 00:05:35.675 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.675 EAL: request: mp_malloc_sync 00:05:35.675 EAL: No shared files mode enabled, IPC is disabled 00:05:35.675 EAL: Heap on socket 0 was expanded by 1026MB 00:05:37.047 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.304 EAL: request: mp_malloc_sync 00:05:37.304 EAL: No shared files mode enabled, IPC is disabled 00:05:37.304 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:38.674 passed 00:05:38.674 00:05:38.674 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.674 suites 1 1 n/a 0 0 00:05:38.674 tests 2 2 2 0 0 00:05:38.674 asserts 5411 5411 5411 0 n/a 00:05:38.674 00:05:38.674 Elapsed time = 6.437 seconds 00:05:38.674 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.674 EAL: request: mp_malloc_sync 00:05:38.674 EAL: No shared files mode enabled, IPC is disabled 00:05:38.674 EAL: Heap on socket 0 was shrunk by 2MB 00:05:38.674 EAL: No shared files mode enabled, IPC is disabled 00:05:38.674 EAL: No shared files mode enabled, IPC is disabled 00:05:38.674 EAL: No shared files mode enabled, IPC is disabled 00:05:38.674 00:05:38.674 real 0m6.751s 00:05:38.674 user 0m5.930s 00:05:38.674 sys 0m0.663s 00:05:38.674 15:31:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.674 15:31:00 -- common/autotest_common.sh@10 -- # set +x 00:05:38.674 ************************************ 00:05:38.674 END TEST env_vtophys 00:05:38.674 ************************************ 00:05:38.674 15:31:00 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:38.674 15:31:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.674 15:31:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.674 15:31:00 -- common/autotest_common.sh@10 -- # set +x 00:05:38.674 ************************************ 00:05:38.674 START TEST env_pci 00:05:38.674 ************************************ 00:05:38.674 15:31:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:38.674 00:05:38.674 00:05:38.674 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.674 http://cunit.sourceforge.net/ 00:05:38.674 00:05:38.674 00:05:38.674 Suite: pci 00:05:38.674 Test: pci_hook ...[2024-07-24 15:31:00.105699] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56319 has claimed it 00:05:38.674 passed 00:05:38.674 00:05:38.674 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.674 suites 1 1 n/a 0 0 00:05:38.674 tests 1 1 1 0 0 00:05:38.674 asserts 25 25 25 0 n/a 00:05:38.674 00:05:38.674 Elapsed time = 0.008 seconds 00:05:38.674 EAL: Cannot find device (10000:00:01.0) 00:05:38.674 EAL: Failed to attach device 
on primary process 00:05:38.674 ************************************ 00:05:38.674 END TEST env_pci 00:05:38.674 ************************************ 00:05:38.674 00:05:38.674 real 0m0.074s 00:05:38.674 user 0m0.037s 00:05:38.674 sys 0m0.036s 00:05:38.674 15:31:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.674 15:31:00 -- common/autotest_common.sh@10 -- # set +x 00:05:38.674 15:31:00 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:38.674 15:31:00 -- env/env.sh@15 -- # uname 00:05:38.674 15:31:00 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:38.674 15:31:00 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:38.674 15:31:00 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.674 15:31:00 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:38.674 15:31:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.674 15:31:00 -- common/autotest_common.sh@10 -- # set +x 00:05:38.674 ************************************ 00:05:38.674 START TEST env_dpdk_post_init 00:05:38.674 ************************************ 00:05:38.674 15:31:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.674 EAL: Detected CPU lcores: 10 00:05:38.674 EAL: Detected NUMA nodes: 1 00:05:38.674 EAL: Detected shared linkage of DPDK 00:05:38.932 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:38.932 EAL: Selected IOVA mode 'PA' 00:05:38.932 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:38.932 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:38.932 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:38.932 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:05:38.932 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:05:38.932 Starting DPDK initialization... 00:05:38.932 Starting SPDK post initialization... 00:05:38.932 SPDK NVMe probe 00:05:38.932 Attaching to 0000:00:06.0 00:05:38.932 Attaching to 0000:00:07.0 00:05:38.932 Attaching to 0000:00:08.0 00:05:38.932 Attaching to 0000:00:09.0 00:05:38.932 Attached to 0000:00:06.0 00:05:38.932 Attached to 0000:00:07.0 00:05:38.932 Attached to 0000:00:09.0 00:05:38.932 Attached to 0000:00:08.0 00:05:38.932 Cleaning up... 
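The 1b36:0010 vendor:device IDs probed above are QEMU's emulated NVMe controllers. A quick way to inspect them on a VM like this one, sketched under the assumption that lspci is available (device addresses vary per VM):

```bash
# List the QEMU-emulated NVMe controllers (vendor 1b36, device 0010)
# that spdk_nvme probed above.
lspci -d 1b36:0010

# Show which kernel driver, if any, currently owns each controller.
for dev in $(lspci -D -d 1b36:0010 | awk '{print $1}'); do
    drv=$(basename "$(readlink "/sys/bus/pci/devices/$dev/driver" 2>/dev/null)" 2>/dev/null)
    echo "$dev -> ${drv:-none}"
done

# SPDK's setup script unbinds such devices from the kernel driver so
# that userspace probing like the run above can claim them:
#   sudo scripts/setup.sh
```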
00:05:38.932 ************************************ 00:05:38.932 END TEST env_dpdk_post_init 00:05:38.932 ************************************ 00:05:38.932 00:05:38.932 real 0m0.291s 00:05:38.932 user 0m0.114s 00:05:38.932 sys 0m0.078s 00:05:38.932 15:31:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.932 15:31:00 -- common/autotest_common.sh@10 -- # set +x 00:05:39.190 15:31:00 -- env/env.sh@26 -- # uname 00:05:39.190 15:31:00 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:39.190 15:31:00 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.190 15:31:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.190 15:31:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.190 15:31:00 -- common/autotest_common.sh@10 -- # set +x 00:05:39.190 ************************************ 00:05:39.190 START TEST env_mem_callbacks 00:05:39.190 ************************************ 00:05:39.190 15:31:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.190 EAL: Detected CPU lcores: 10 00:05:39.190 EAL: Detected NUMA nodes: 1 00:05:39.190 EAL: Detected shared linkage of DPDK 00:05:39.190 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.190 EAL: Selected IOVA mode 'PA' 00:05:39.190 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.190 00:05:39.190 00:05:39.190 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.190 http://cunit.sourceforge.net/ 00:05:39.190 00:05:39.190 00:05:39.190 Suite: memory 00:05:39.190 Test: test ... 00:05:39.190 register 0x200000200000 2097152 00:05:39.190 malloc 3145728 00:05:39.190 register 0x200000400000 4194304 00:05:39.190 buf 0x2000004fffc0 len 3145728 PASSED 00:05:39.190 malloc 64 00:05:39.190 buf 0x2000004ffec0 len 64 PASSED 00:05:39.190 malloc 4194304 00:05:39.190 register 0x200000800000 6291456 00:05:39.190 buf 0x2000009fffc0 len 4194304 PASSED 00:05:39.190 free 0x2000004fffc0 3145728 00:05:39.190 free 0x2000004ffec0 64 00:05:39.190 unregister 0x200000400000 4194304 PASSED 00:05:39.190 free 0x2000009fffc0 4194304 00:05:39.190 unregister 0x200000800000 6291456 PASSED 00:05:39.190 malloc 8388608 00:05:39.190 register 0x200000400000 10485760 00:05:39.190 buf 0x2000005fffc0 len 8388608 PASSED 00:05:39.190 free 0x2000005fffc0 8388608 00:05:39.190 unregister 0x200000400000 10485760 PASSED 00:05:39.449 passed 00:05:39.449 00:05:39.449 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.449 suites 1 1 n/a 0 0 00:05:39.449 tests 1 1 1 0 0 00:05:39.449 asserts 15 15 15 0 n/a 00:05:39.449 00:05:39.449 Elapsed time = 0.069 seconds 00:05:39.449 ************************************ 00:05:39.449 END TEST env_mem_callbacks 00:05:39.449 ************************************ 00:05:39.449 00:05:39.449 real 0m0.271s 00:05:39.449 user 0m0.103s 00:05:39.449 sys 0m0.063s 00:05:39.449 15:31:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.449 15:31:00 -- common/autotest_common.sh@10 -- # set +x 00:05:39.449 ************************************ 00:05:39.449 END TEST env 00:05:39.449 ************************************ 00:05:39.449 00:05:39.449 real 0m8.128s 00:05:39.449 user 0m6.658s 00:05:39.449 sys 0m1.070s 00:05:39.449 15:31:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.449 15:31:00 -- common/autotest_common.sh@10 -- # set +x 00:05:39.449 15:31:00 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
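The START/END banners and real/user/sys triplets throughout this log come from the run_test helper in autotest_common.sh, which wraps a command with banners and times it. A simplified sketch of that pattern (not the exact upstream implementation, which also validates argument counts and manages xtrace):

```bash
# Simplified sketch of the run_test wrapper pattern seen in this log.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
```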
00:05:39.449 15:31:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.449 15:31:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.449 15:31:00 -- common/autotest_common.sh@10 -- # set +x 00:05:39.449 ************************************ 00:05:39.449 START TEST rpc 00:05:39.449 ************************************ 00:05:39.449 15:31:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:39.449 * Looking for test storage... 00:05:39.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.449 15:31:00 -- rpc/rpc.sh@65 -- # spdk_pid=56437 00:05:39.449 15:31:00 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.449 15:31:00 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:39.449 15:31:00 -- rpc/rpc.sh@67 -- # waitforlisten 56437 00:05:39.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.449 15:31:00 -- common/autotest_common.sh@819 -- # '[' -z 56437 ']' 00:05:39.449 15:31:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.449 15:31:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:39.449 15:31:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.449 15:31:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:39.449 15:31:00 -- common/autotest_common.sh@10 -- # set +x 00:05:39.708 [2024-07-24 15:31:01.095844] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:39.708 [2024-07-24 15:31:01.096241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56437 ] 00:05:39.708 [2024-07-24 15:31:01.273964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.967 [2024-07-24 15:31:01.490406] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.967 [2024-07-24 15:31:01.490631] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:39.967 [2024-07-24 15:31:01.490656] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56437' to capture a snapshot of events at runtime. 00:05:39.967 [2024-07-24 15:31:01.490671] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56437 for offline analysis/debug. 
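The "Waiting for process to start up and listen on UNIX domain socket" message above comes from waitforlisten, which blocks until the freshly launched spdk_tgt answers RPC on its socket. A minimal re-implementation of the idea (the real helper in autotest_common.sh also enforces a longer timeout and more checks; rpc.py is assumed to be on PATH):

```bash
# Minimal sketch of the waitforlisten idea: poll the RPC socket until
# the target responds, bailing out if the process dies first.
waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || return 1            # target died
        if rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null; then
            return 0                                       # RPC is up
        fi
        sleep 0.1
    done
    return 1
}
```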
00:05:39.967 [2024-07-24 15:31:01.490711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.343 15:31:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:41.343 15:31:02 -- common/autotest_common.sh@852 -- # return 0 00:05:41.343 15:31:02 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.343 15:31:02 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.343 15:31:02 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:41.343 15:31:02 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:41.343 15:31:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.343 15:31:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.343 15:31:02 -- common/autotest_common.sh@10 -- # set +x 00:05:41.343 ************************************ 00:05:41.343 START TEST rpc_integrity 00:05:41.343 ************************************ 00:05:41.343 15:31:02 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:41.343 15:31:02 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.343 15:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.343 15:31:02 -- common/autotest_common.sh@10 -- # set +x 00:05:41.343 15:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.343 15:31:02 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.343 15:31:02 -- rpc/rpc.sh@13 -- # jq length 00:05:41.343 15:31:02 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.343 15:31:02 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.343 15:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.343 15:31:02 -- common/autotest_common.sh@10 -- # set +x 00:05:41.343 15:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.343 15:31:02 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:41.343 15:31:02 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.343 15:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.343 15:31:02 -- common/autotest_common.sh@10 -- # set +x 00:05:41.343 15:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.343 15:31:02 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.343 { 00:05:41.343 "name": "Malloc0", 00:05:41.343 "aliases": [ 00:05:41.343 "1f021fc8-8788-4a58-915d-f2db40be28a3" 00:05:41.343 ], 00:05:41.343 "product_name": "Malloc disk", 00:05:41.343 "block_size": 512, 00:05:41.343 "num_blocks": 16384, 00:05:41.343 "uuid": "1f021fc8-8788-4a58-915d-f2db40be28a3", 00:05:41.343 "assigned_rate_limits": { 00:05:41.343 "rw_ios_per_sec": 0, 00:05:41.343 "rw_mbytes_per_sec": 0, 00:05:41.343 "r_mbytes_per_sec": 0, 00:05:41.343 "w_mbytes_per_sec": 0 00:05:41.343 }, 00:05:41.343 "claimed": false, 00:05:41.343 "zoned": false, 00:05:41.343 "supported_io_types": { 00:05:41.343 "read": true, 00:05:41.343 "write": true, 00:05:41.343 "unmap": true, 00:05:41.343 "write_zeroes": true, 00:05:41.343 "flush": true, 00:05:41.343 "reset": true, 00:05:41.343 "compare": false, 00:05:41.343 "compare_and_write": false, 00:05:41.343 "abort": true, 00:05:41.343 "nvme_admin": false, 00:05:41.343 "nvme_io": false 00:05:41.343 }, 00:05:41.343 "memory_domains": [ 00:05:41.343 { 00:05:41.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.343 
"dma_device_type": 2 00:05:41.343 } 00:05:41.343 ], 00:05:41.343 "driver_specific": {} 00:05:41.343 } 00:05:41.343 ]' 00:05:41.343 15:31:02 -- rpc/rpc.sh@17 -- # jq length 00:05:41.343 15:31:02 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.343 15:31:02 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:41.343 15:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.343 15:31:02 -- common/autotest_common.sh@10 -- # set +x 00:05:41.343 [2024-07-24 15:31:02.915370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:41.343 [2024-07-24 15:31:02.915515] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.343 [2024-07-24 15:31:02.915566] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:05:41.343 [2024-07-24 15:31:02.915584] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.343 [2024-07-24 15:31:02.918473] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.343 [2024-07-24 15:31:02.918526] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.343 Passthru0 00:05:41.343 15:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.343 15:31:02 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:41.343 15:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.343 15:31:02 -- common/autotest_common.sh@10 -- # set +x 00:05:41.606 15:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.606 15:31:02 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.606 { 00:05:41.606 "name": "Malloc0", 00:05:41.606 "aliases": [ 00:05:41.606 "1f021fc8-8788-4a58-915d-f2db40be28a3" 00:05:41.606 ], 00:05:41.606 "product_name": "Malloc disk", 00:05:41.606 "block_size": 512, 00:05:41.606 "num_blocks": 16384, 00:05:41.606 "uuid": "1f021fc8-8788-4a58-915d-f2db40be28a3", 00:05:41.606 "assigned_rate_limits": { 00:05:41.606 "rw_ios_per_sec": 0, 00:05:41.606 "rw_mbytes_per_sec": 0, 00:05:41.606 "r_mbytes_per_sec": 0, 00:05:41.606 "w_mbytes_per_sec": 0 00:05:41.606 }, 00:05:41.606 "claimed": true, 00:05:41.606 "claim_type": "exclusive_write", 00:05:41.606 "zoned": false, 00:05:41.606 "supported_io_types": { 00:05:41.606 "read": true, 00:05:41.606 "write": true, 00:05:41.606 "unmap": true, 00:05:41.606 "write_zeroes": true, 00:05:41.606 "flush": true, 00:05:41.606 "reset": true, 00:05:41.606 "compare": false, 00:05:41.606 "compare_and_write": false, 00:05:41.606 "abort": true, 00:05:41.606 "nvme_admin": false, 00:05:41.606 "nvme_io": false 00:05:41.606 }, 00:05:41.606 "memory_domains": [ 00:05:41.606 { 00:05:41.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.606 "dma_device_type": 2 00:05:41.606 } 00:05:41.606 ], 00:05:41.606 "driver_specific": {} 00:05:41.606 }, 00:05:41.606 { 00:05:41.606 "name": "Passthru0", 00:05:41.606 "aliases": [ 00:05:41.606 "ba1077b1-d012-56e5-9e26-da616ba0b0fd" 00:05:41.606 ], 00:05:41.606 "product_name": "passthru", 00:05:41.606 "block_size": 512, 00:05:41.606 "num_blocks": 16384, 00:05:41.606 "uuid": "ba1077b1-d012-56e5-9e26-da616ba0b0fd", 00:05:41.606 "assigned_rate_limits": { 00:05:41.606 "rw_ios_per_sec": 0, 00:05:41.606 "rw_mbytes_per_sec": 0, 00:05:41.606 "r_mbytes_per_sec": 0, 00:05:41.606 "w_mbytes_per_sec": 0 00:05:41.606 }, 00:05:41.606 "claimed": false, 00:05:41.606 "zoned": false, 00:05:41.606 "supported_io_types": { 00:05:41.606 "read": true, 00:05:41.606 "write": true, 00:05:41.606 "unmap": true, 00:05:41.606 
"write_zeroes": true, 00:05:41.606 "flush": true, 00:05:41.606 "reset": true, 00:05:41.606 "compare": false, 00:05:41.606 "compare_and_write": false, 00:05:41.606 "abort": true, 00:05:41.606 "nvme_admin": false, 00:05:41.606 "nvme_io": false 00:05:41.606 }, 00:05:41.606 "memory_domains": [ 00:05:41.606 { 00:05:41.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.606 "dma_device_type": 2 00:05:41.606 } 00:05:41.606 ], 00:05:41.606 "driver_specific": { 00:05:41.606 "passthru": { 00:05:41.606 "name": "Passthru0", 00:05:41.606 "base_bdev_name": "Malloc0" 00:05:41.606 } 00:05:41.606 } 00:05:41.606 } 00:05:41.606 ]' 00:05:41.606 15:31:02 -- rpc/rpc.sh@21 -- # jq length 00:05:41.606 15:31:02 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.606 15:31:02 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.606 15:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.606 15:31:02 -- common/autotest_common.sh@10 -- # set +x 00:05:41.606 15:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.606 15:31:02 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:41.606 15:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.606 15:31:02 -- common/autotest_common.sh@10 -- # set +x 00:05:41.606 15:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.606 15:31:03 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.606 15:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.606 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:41.606 15:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.606 15:31:03 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.606 15:31:03 -- rpc/rpc.sh@26 -- # jq length 00:05:41.606 ************************************ 00:05:41.606 END TEST rpc_integrity 00:05:41.606 ************************************ 00:05:41.606 15:31:03 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.606 00:05:41.606 real 0m0.339s 00:05:41.606 user 0m0.203s 00:05:41.606 sys 0m0.043s 00:05:41.606 15:31:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.606 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:41.606 15:31:03 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:41.606 15:31:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.606 15:31:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.606 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:41.606 ************************************ 00:05:41.606 START TEST rpc_plugins 00:05:41.606 ************************************ 00:05:41.606 15:31:03 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:41.606 15:31:03 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:41.606 15:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.606 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:41.607 15:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.607 15:31:03 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:41.607 15:31:03 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:41.607 15:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.607 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:41.607 15:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.607 15:31:03 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:41.607 { 00:05:41.607 "name": "Malloc1", 00:05:41.607 "aliases": [ 00:05:41.607 "a0495f0e-8af1-47cf-8f0a-f3d6336049eb" 00:05:41.607 ], 00:05:41.607 "product_name": "Malloc disk", 00:05:41.607 
"block_size": 4096, 00:05:41.607 "num_blocks": 256, 00:05:41.607 "uuid": "a0495f0e-8af1-47cf-8f0a-f3d6336049eb", 00:05:41.607 "assigned_rate_limits": { 00:05:41.607 "rw_ios_per_sec": 0, 00:05:41.607 "rw_mbytes_per_sec": 0, 00:05:41.607 "r_mbytes_per_sec": 0, 00:05:41.607 "w_mbytes_per_sec": 0 00:05:41.607 }, 00:05:41.607 "claimed": false, 00:05:41.607 "zoned": false, 00:05:41.607 "supported_io_types": { 00:05:41.607 "read": true, 00:05:41.607 "write": true, 00:05:41.607 "unmap": true, 00:05:41.607 "write_zeroes": true, 00:05:41.607 "flush": true, 00:05:41.607 "reset": true, 00:05:41.607 "compare": false, 00:05:41.607 "compare_and_write": false, 00:05:41.607 "abort": true, 00:05:41.607 "nvme_admin": false, 00:05:41.607 "nvme_io": false 00:05:41.607 }, 00:05:41.607 "memory_domains": [ 00:05:41.607 { 00:05:41.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.607 "dma_device_type": 2 00:05:41.607 } 00:05:41.607 ], 00:05:41.607 "driver_specific": {} 00:05:41.607 } 00:05:41.607 ]' 00:05:41.607 15:31:03 -- rpc/rpc.sh@32 -- # jq length 00:05:41.865 15:31:03 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:41.865 15:31:03 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:41.865 15:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.865 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:41.865 15:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.865 15:31:03 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:41.866 15:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.866 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:41.866 15:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.866 15:31:03 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:41.866 15:31:03 -- rpc/rpc.sh@36 -- # jq length 00:05:41.866 ************************************ 00:05:41.866 END TEST rpc_plugins 00:05:41.866 ************************************ 00:05:41.866 15:31:03 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:41.866 00:05:41.866 real 0m0.164s 00:05:41.866 user 0m0.100s 00:05:41.866 sys 0m0.022s 00:05:41.866 15:31:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.866 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:41.866 15:31:03 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:41.866 15:31:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.866 15:31:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.866 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:41.866 ************************************ 00:05:41.866 START TEST rpc_trace_cmd_test 00:05:41.866 ************************************ 00:05:41.866 15:31:03 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:41.866 15:31:03 -- rpc/rpc.sh@40 -- # local info 00:05:41.866 15:31:03 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:41.866 15:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.866 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:41.866 15:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.866 15:31:03 -- rpc/rpc.sh@42 -- # info='{ 00:05:41.866 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56437", 00:05:41.866 "tpoint_group_mask": "0x8", 00:05:41.866 "iscsi_conn": { 00:05:41.866 "mask": "0x2", 00:05:41.866 "tpoint_mask": "0x0" 00:05:41.866 }, 00:05:41.866 "scsi": { 00:05:41.866 "mask": "0x4", 00:05:41.866 "tpoint_mask": "0x0" 00:05:41.866 }, 00:05:41.866 "bdev": { 00:05:41.866 "mask": "0x8", 00:05:41.866 "tpoint_mask": 
"0xffffffffffffffff" 00:05:41.866 }, 00:05:41.866 "nvmf_rdma": { 00:05:41.866 "mask": "0x10", 00:05:41.866 "tpoint_mask": "0x0" 00:05:41.866 }, 00:05:41.866 "nvmf_tcp": { 00:05:41.866 "mask": "0x20", 00:05:41.866 "tpoint_mask": "0x0" 00:05:41.866 }, 00:05:41.866 "ftl": { 00:05:41.866 "mask": "0x40", 00:05:41.866 "tpoint_mask": "0x0" 00:05:41.866 }, 00:05:41.866 "blobfs": { 00:05:41.866 "mask": "0x80", 00:05:41.866 "tpoint_mask": "0x0" 00:05:41.866 }, 00:05:41.866 "dsa": { 00:05:41.866 "mask": "0x200", 00:05:41.866 "tpoint_mask": "0x0" 00:05:41.866 }, 00:05:41.866 "thread": { 00:05:41.866 "mask": "0x400", 00:05:41.866 "tpoint_mask": "0x0" 00:05:41.866 }, 00:05:41.866 "nvme_pcie": { 00:05:41.866 "mask": "0x800", 00:05:41.866 "tpoint_mask": "0x0" 00:05:41.866 }, 00:05:41.866 "iaa": { 00:05:41.866 "mask": "0x1000", 00:05:41.866 "tpoint_mask": "0x0" 00:05:41.866 }, 00:05:41.866 "nvme_tcp": { 00:05:41.866 "mask": "0x2000", 00:05:41.866 "tpoint_mask": "0x0" 00:05:41.866 }, 00:05:41.866 "bdev_nvme": { 00:05:41.866 "mask": "0x4000", 00:05:41.866 "tpoint_mask": "0x0" 00:05:41.866 } 00:05:41.866 }' 00:05:41.866 15:31:03 -- rpc/rpc.sh@43 -- # jq length 00:05:41.866 15:31:03 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:41.866 15:31:03 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:42.124 15:31:03 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:42.124 15:31:03 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:42.124 15:31:03 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:42.124 15:31:03 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:42.124 15:31:03 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:42.124 15:31:03 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:42.124 ************************************ 00:05:42.124 END TEST rpc_trace_cmd_test 00:05:42.124 ************************************ 00:05:42.124 15:31:03 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:42.124 00:05:42.124 real 0m0.284s 00:05:42.124 user 0m0.247s 00:05:42.124 sys 0m0.028s 00:05:42.124 15:31:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.124 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:42.124 15:31:03 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:42.124 15:31:03 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.124 15:31:03 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:42.124 15:31:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.124 15:31:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.124 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:42.124 ************************************ 00:05:42.124 START TEST rpc_daemon_integrity 00:05:42.124 ************************************ 00:05:42.124 15:31:03 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:42.124 15:31:03 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.124 15:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.124 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:42.124 15:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.124 15:31:03 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.382 15:31:03 -- rpc/rpc.sh@13 -- # jq length 00:05:42.382 15:31:03 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.382 15:31:03 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.382 15:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.382 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:42.382 15:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.382 15:31:03 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:42.382 15:31:03 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.382 15:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.382 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:42.382 15:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.382 15:31:03 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.382 { 00:05:42.382 "name": "Malloc2", 00:05:42.382 "aliases": [ 00:05:42.382 "124375ca-2ea2-4644-b7e6-448ece7d0677" 00:05:42.382 ], 00:05:42.382 "product_name": "Malloc disk", 00:05:42.382 "block_size": 512, 00:05:42.382 "num_blocks": 16384, 00:05:42.382 "uuid": "124375ca-2ea2-4644-b7e6-448ece7d0677", 00:05:42.382 "assigned_rate_limits": { 00:05:42.382 "rw_ios_per_sec": 0, 00:05:42.382 "rw_mbytes_per_sec": 0, 00:05:42.382 "r_mbytes_per_sec": 0, 00:05:42.382 "w_mbytes_per_sec": 0 00:05:42.382 }, 00:05:42.382 "claimed": false, 00:05:42.382 "zoned": false, 00:05:42.382 "supported_io_types": { 00:05:42.382 "read": true, 00:05:42.382 "write": true, 00:05:42.382 "unmap": true, 00:05:42.382 "write_zeroes": true, 00:05:42.382 "flush": true, 00:05:42.382 "reset": true, 00:05:42.382 "compare": false, 00:05:42.382 "compare_and_write": false, 00:05:42.382 "abort": true, 00:05:42.382 "nvme_admin": false, 00:05:42.382 "nvme_io": false 00:05:42.382 }, 00:05:42.382 "memory_domains": [ 00:05:42.382 { 00:05:42.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.382 "dma_device_type": 2 00:05:42.382 } 00:05:42.382 ], 00:05:42.382 "driver_specific": {} 00:05:42.382 } 00:05:42.382 ]' 00:05:42.382 15:31:03 -- rpc/rpc.sh@17 -- # jq length 00:05:42.382 15:31:03 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.382 15:31:03 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:42.382 15:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.382 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:42.382 [2024-07-24 15:31:03.872895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:42.382 [2024-07-24 15:31:03.872999] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.382 [2024-07-24 15:31:03.873037] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:05:42.382 [2024-07-24 15:31:03.873058] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.382 [2024-07-24 15:31:03.876425] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.382 [2024-07-24 15:31:03.876507] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.382 Passthru0 00:05:42.382 15:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.382 15:31:03 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.382 15:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.382 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:42.382 15:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.382 15:31:03 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.382 { 00:05:42.382 "name": "Malloc2", 00:05:42.382 "aliases": [ 00:05:42.382 "124375ca-2ea2-4644-b7e6-448ece7d0677" 00:05:42.382 ], 00:05:42.382 "product_name": "Malloc disk", 00:05:42.382 "block_size": 512, 00:05:42.382 "num_blocks": 16384, 00:05:42.382 "uuid": "124375ca-2ea2-4644-b7e6-448ece7d0677", 00:05:42.382 "assigned_rate_limits": { 00:05:42.382 "rw_ios_per_sec": 0, 00:05:42.382 "rw_mbytes_per_sec": 0, 00:05:42.382 "r_mbytes_per_sec": 0, 00:05:42.382 
"w_mbytes_per_sec": 0 00:05:42.382 }, 00:05:42.382 "claimed": true, 00:05:42.382 "claim_type": "exclusive_write", 00:05:42.382 "zoned": false, 00:05:42.382 "supported_io_types": { 00:05:42.382 "read": true, 00:05:42.382 "write": true, 00:05:42.382 "unmap": true, 00:05:42.382 "write_zeroes": true, 00:05:42.382 "flush": true, 00:05:42.382 "reset": true, 00:05:42.382 "compare": false, 00:05:42.382 "compare_and_write": false, 00:05:42.382 "abort": true, 00:05:42.382 "nvme_admin": false, 00:05:42.382 "nvme_io": false 00:05:42.382 }, 00:05:42.382 "memory_domains": [ 00:05:42.382 { 00:05:42.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.382 "dma_device_type": 2 00:05:42.382 } 00:05:42.382 ], 00:05:42.382 "driver_specific": {} 00:05:42.382 }, 00:05:42.382 { 00:05:42.382 "name": "Passthru0", 00:05:42.382 "aliases": [ 00:05:42.382 "82b35afb-879b-5882-830b-77e3cf67dacb" 00:05:42.382 ], 00:05:42.382 "product_name": "passthru", 00:05:42.382 "block_size": 512, 00:05:42.382 "num_blocks": 16384, 00:05:42.382 "uuid": "82b35afb-879b-5882-830b-77e3cf67dacb", 00:05:42.382 "assigned_rate_limits": { 00:05:42.382 "rw_ios_per_sec": 0, 00:05:42.382 "rw_mbytes_per_sec": 0, 00:05:42.382 "r_mbytes_per_sec": 0, 00:05:42.382 "w_mbytes_per_sec": 0 00:05:42.382 }, 00:05:42.382 "claimed": false, 00:05:42.382 "zoned": false, 00:05:42.382 "supported_io_types": { 00:05:42.382 "read": true, 00:05:42.382 "write": true, 00:05:42.382 "unmap": true, 00:05:42.382 "write_zeroes": true, 00:05:42.382 "flush": true, 00:05:42.382 "reset": true, 00:05:42.382 "compare": false, 00:05:42.382 "compare_and_write": false, 00:05:42.382 "abort": true, 00:05:42.382 "nvme_admin": false, 00:05:42.382 "nvme_io": false 00:05:42.382 }, 00:05:42.382 "memory_domains": [ 00:05:42.382 { 00:05:42.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.382 "dma_device_type": 2 00:05:42.382 } 00:05:42.382 ], 00:05:42.382 "driver_specific": { 00:05:42.382 "passthru": { 00:05:42.382 "name": "Passthru0", 00:05:42.382 "base_bdev_name": "Malloc2" 00:05:42.382 } 00:05:42.382 } 00:05:42.382 } 00:05:42.382 ]' 00:05:42.382 15:31:03 -- rpc/rpc.sh@21 -- # jq length 00:05:42.382 15:31:03 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.382 15:31:03 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.382 15:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.382 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:42.382 15:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.382 15:31:03 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:42.382 15:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.382 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:42.640 15:31:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.640 15:31:04 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.640 15:31:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.640 15:31:04 -- common/autotest_common.sh@10 -- # set +x 00:05:42.640 15:31:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.640 15:31:04 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.640 15:31:04 -- rpc/rpc.sh@26 -- # jq length 00:05:42.640 ************************************ 00:05:42.640 END TEST rpc_daemon_integrity 00:05:42.640 ************************************ 00:05:42.640 15:31:04 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.640 00:05:42.640 real 0m0.365s 00:05:42.640 user 0m0.224s 00:05:42.640 sys 0m0.038s 00:05:42.640 15:31:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.640 
15:31:04 -- common/autotest_common.sh@10 -- # set +x 00:05:42.640 15:31:04 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:42.640 15:31:04 -- rpc/rpc.sh@84 -- # killprocess 56437 00:05:42.640 15:31:04 -- common/autotest_common.sh@926 -- # '[' -z 56437 ']' 00:05:42.640 15:31:04 -- common/autotest_common.sh@930 -- # kill -0 56437 00:05:42.640 15:31:04 -- common/autotest_common.sh@931 -- # uname 00:05:42.640 15:31:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:42.640 15:31:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56437 00:05:42.640 killing process with pid 56437 00:05:42.640 15:31:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:42.640 15:31:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:42.640 15:31:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56437' 00:05:42.640 15:31:04 -- common/autotest_common.sh@945 -- # kill 56437 00:05:42.640 15:31:04 -- common/autotest_common.sh@950 -- # wait 56437 00:05:44.544 ************************************ 00:05:44.544 END TEST rpc 00:05:44.544 ************************************ 00:05:44.544 00:05:44.544 real 0m5.121s 00:05:44.544 user 0m6.085s 00:05:44.544 sys 0m0.766s 00:05:44.544 15:31:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.544 15:31:06 -- common/autotest_common.sh@10 -- # set +x 00:05:44.544 15:31:06 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:44.544 15:31:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.544 15:31:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.544 15:31:06 -- common/autotest_common.sh@10 -- # set +x 00:05:44.544 ************************************ 00:05:44.544 START TEST rpc_client 00:05:44.544 ************************************ 00:05:44.544 15:31:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:44.544 * Looking for test storage... 
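The rpc_integrity and rpc_daemon_integrity suites above drive the target purely over JSON-RPC (rpc_cmd is a thin wrapper around rpc.py): create a malloc bdev, layer a passthru bdev on it, verify the bdev list with jq, then tear down in reverse order. Condensed into a standalone sketch, assuming a running spdk_tgt on the default socket:

```bash
# Condensed sketch of the rpc_integrity flow above, using rpc.py directly.
rpc=rpc.py

malloc=$($rpc bdev_malloc_create 8 512)              # 8 MB bdev, 512 B blocks
$rpc bdev_passthru_create -b "$malloc" -p Passthru0

# Both bdevs should now be visible.
[ "$($rpc bdev_get_bdevs | jq length)" -eq 2 ]

# Tear down in reverse order and confirm an empty bdev list.
$rpc bdev_passthru_delete Passthru0
$rpc bdev_malloc_delete "$malloc"
[ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]
```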
00:05:44.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:44.803 15:31:06 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:44.803 OK 00:05:44.803 15:31:06 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:44.803 00:05:44.803 real 0m0.132s 00:05:44.803 user 0m0.055s 00:05:44.803 sys 0m0.084s 00:05:44.803 15:31:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.803 15:31:06 -- common/autotest_common.sh@10 -- # set +x 00:05:44.803 ************************************ 00:05:44.803 END TEST rpc_client 00:05:44.803 ************************************ 00:05:44.803 15:31:06 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:44.803 15:31:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.803 15:31:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.803 15:31:06 -- common/autotest_common.sh@10 -- # set +x 00:05:44.803 ************************************ 00:05:44.803 START TEST json_config 00:05:44.803 ************************************ 00:05:44.803 15:31:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:44.803 15:31:06 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:44.803 15:31:06 -- nvmf/common.sh@7 -- # uname -s 00:05:44.803 15:31:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.803 15:31:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.803 15:31:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.803 15:31:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.803 15:31:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.803 15:31:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.803 15:31:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.803 15:31:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.803 15:31:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.803 15:31:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.803 15:31:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f241b322-a8d6-49ac-997e-35dd184b3295 00:05:44.803 15:31:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=f241b322-a8d6-49ac-997e-35dd184b3295 00:05:44.803 15:31:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.803 15:31:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.803 15:31:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:44.803 15:31:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:44.803 15:31:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.803 15:31:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.803 15:31:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.803 15:31:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.803 15:31:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.803 15:31:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.803 15:31:06 -- paths/export.sh@5 -- # export PATH 00:05:44.803 15:31:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.803 15:31:06 -- nvmf/common.sh@46 -- # : 0 00:05:44.803 15:31:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:44.803 15:31:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:44.803 15:31:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:44.803 15:31:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.803 15:31:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.803 15:31:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:44.803 15:31:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:44.803 15:31:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:44.803 15:31:06 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:44.803 15:31:06 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:44.803 15:31:06 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:44.803 15:31:06 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:44.803 15:31:06 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:44.803 WARNING: No tests are enabled so not running JSON configuration tests 00:05:44.804 15:31:06 -- json_config/json_config.sh@27 -- # exit 0 00:05:44.804 00:05:44.804 real 0m0.078s 00:05:44.804 user 0m0.042s 00:05:44.804 sys 0m0.032s 00:05:44.804 ************************************ 00:05:44.804 END TEST json_config 00:05:44.804 ************************************ 00:05:44.804 15:31:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.804 15:31:06 -- common/autotest_common.sh@10 -- # set +x 00:05:44.804 15:31:06 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:44.804 15:31:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.804 15:31:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.804 15:31:06 -- common/autotest_common.sh@10 -- # set +x 00:05:44.804 ************************************ 00:05:44.804 START TEST json_config_extra_key 00:05:44.804 
************************************ 00:05:44.804 15:31:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:45.062 15:31:06 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:45.062 15:31:06 -- nvmf/common.sh@7 -- # uname -s 00:05:45.062 15:31:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.062 15:31:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.062 15:31:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.062 15:31:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.062 15:31:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.062 15:31:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.062 15:31:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.062 15:31:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.062 15:31:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.062 15:31:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.062 15:31:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f241b322-a8d6-49ac-997e-35dd184b3295 00:05:45.062 15:31:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=f241b322-a8d6-49ac-997e-35dd184b3295 00:05:45.062 15:31:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.062 15:31:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.062 15:31:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:45.062 15:31:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:45.062 15:31:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.062 15:31:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.062 15:31:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.062 15:31:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.063 15:31:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.063 15:31:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.063 15:31:06 -- paths/export.sh@5 -- # export PATH 00:05:45.063 15:31:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.063 15:31:06 -- nvmf/common.sh@46 -- # : 0 00:05:45.063 15:31:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:45.063 15:31:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:45.063 15:31:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:45.063 15:31:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.063 15:31:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.063 15:31:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:45.063 15:31:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:45.063 15:31:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:45.063 INFO: launching applications... 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:45.063 Waiting for target to run... 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56731 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
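json_config_extra_key.sh keys every per-app setting off a single name ('target') using bash associative arrays, which is what the declare -A lines above set up. The pattern in isolation (array names mirror the script's; the values below are illustrative placeholders, not paths from this run):

```bash
# Per-app bookkeeping with bash associative arrays, as in
# json_config_extra_key.sh; values here are illustrative.
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='/path/to/extra_key.json')

app=target
echo "would run: spdk_tgt ${app_params[$app]} -r ${app_socket[$app]} --json ${configs_path[$app]}"
```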
00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:45.063 15:31:06 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56731 /var/tmp/spdk_tgt.sock 00:05:45.063 15:31:06 -- common/autotest_common.sh@819 -- # '[' -z 56731 ']' 00:05:45.063 15:31:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:45.063 15:31:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:45.063 15:31:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:45.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:45.063 15:31:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:45.063 15:31:06 -- common/autotest_common.sh@10 -- # set +x 00:05:45.063 [2024-07-24 15:31:06.592860] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:45.063 [2024-07-24 15:31:06.593046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56731 ] 00:05:45.630 [2024-07-24 15:31:06.946523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.630 [2024-07-24 15:31:07.108420] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:45.630 [2024-07-24 15:31:07.108711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.006 00:05:47.006 INFO: shutting down applications... 00:05:47.006 15:31:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.006 15:31:08 -- common/autotest_common.sh@852 -- # return 0 00:05:47.006 15:31:08 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:47.006 15:31:08 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
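The shutdown that was just announced is deliberately patient: json_config_test_shutdown_app sends SIGINT and then, as the trace below shows, probes the pid with kill -0 every half second for up to 30 rounds before declaring failure. The loop in sketch form, reusing the app_pid array from the sketch above:

  shutdown_app() {
      local app=$1 i
      [[ -n ${app_pid[$app]} ]] || return 0
      kill -SIGINT "${app_pid[$app]}"
      for (( i = 0; i < 30; i++ )); do
          # kill -0 delivers no signal; it only tests that the pid still exists.
          if ! kill -0 "${app_pid[$app]}" 2> /dev/null; then
              app_pid[$app]=
              echo 'SPDK target shutdown done'
              return 0
          fi
          sleep 0.5
      done
      return 1  # still running after ~15s; the caller escalates from here
  }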
00:05:47.006 15:31:08 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:47.006 15:31:08 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:47.006 15:31:08 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:47.006 15:31:08 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56731 ]] 00:05:47.006 15:31:08 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56731 00:05:47.006 15:31:08 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:47.006 15:31:08 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:47.006 15:31:08 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56731 00:05:47.006 15:31:08 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:47.265 15:31:08 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:47.265 15:31:08 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:47.265 15:31:08 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56731 00:05:47.265 15:31:08 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:47.874 15:31:09 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:47.874 15:31:09 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:47.874 15:31:09 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56731 00:05:47.874 15:31:09 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:48.440 15:31:09 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:48.440 15:31:09 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:48.440 15:31:09 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56731 00:05:48.440 15:31:09 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:49.005 15:31:10 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:49.005 15:31:10 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:49.005 15:31:10 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56731 00:05:49.005 15:31:10 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:49.571 15:31:10 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:49.571 15:31:10 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:49.571 15:31:10 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56731 00:05:49.571 15:31:10 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:49.572 SPDK target shutdown done 00:05:49.572 Success 00:05:49.572 15:31:10 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:49.572 15:31:10 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:49.572 15:31:10 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:49.572 15:31:10 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:49.572 00:05:49.572 real 0m4.480s 00:05:49.572 user 0m4.371s 00:05:49.572 sys 0m0.515s 00:05:49.572 ************************************ 00:05:49.572 END TEST json_config_extra_key 00:05:49.572 ************************************ 00:05:49.572 15:31:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.572 15:31:10 -- common/autotest_common.sh@10 -- # set +x 00:05:49.572 15:31:10 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:49.572 15:31:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.572 15:31:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.572 15:31:10 -- common/autotest_common.sh@10 -- # 
set +x 00:05:49.572 ************************************ 00:05:49.572 START TEST alias_rpc 00:05:49.572 ************************************ 00:05:49.572 15:31:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:49.572 * Looking for test storage... 00:05:49.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:49.572 15:31:10 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:49.572 15:31:10 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56835 00:05:49.572 15:31:10 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.572 15:31:10 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56835 00:05:49.572 15:31:10 -- common/autotest_common.sh@819 -- # '[' -z 56835 ']' 00:05:49.572 15:31:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.572 15:31:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:49.572 15:31:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.572 15:31:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:49.572 15:31:10 -- common/autotest_common.sh@10 -- # set +x 00:05:49.572 [2024-07-24 15:31:11.126589] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:49.572 [2024-07-24 15:31:11.126743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56835 ] 00:05:49.830 [2024-07-24 15:31:11.294478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.087 [2024-07-24 15:31:11.474001] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.087 [2024-07-24 15:31:11.474310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.462 15:31:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:51.462 15:31:12 -- common/autotest_common.sh@852 -- # return 0 00:05:51.462 15:31:12 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:51.462 15:31:13 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56835 00:05:51.462 15:31:13 -- common/autotest_common.sh@926 -- # '[' -z 56835 ']' 00:05:51.462 15:31:13 -- common/autotest_common.sh@930 -- # kill -0 56835 00:05:51.462 15:31:13 -- common/autotest_common.sh@931 -- # uname 00:05:51.462 15:31:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:51.462 15:31:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56835 00:05:51.462 killing process with pid 56835 00:05:51.462 15:31:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:51.462 15:31:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:51.462 15:31:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56835' 00:05:51.462 15:31:13 -- common/autotest_common.sh@945 -- # kill 56835 00:05:51.462 15:31:13 -- common/autotest_common.sh@950 -- # wait 56835 00:05:53.989 ************************************ 00:05:53.989 END TEST alias_rpc 00:05:53.989 ************************************ 00:05:53.989 00:05:53.989 real 0m4.154s 00:05:53.989 user 0m4.536s 
00:05:53.989 sys 0m0.496s 00:05:53.989 15:31:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.989 15:31:15 -- common/autotest_common.sh@10 -- # set +x 00:05:53.989 15:31:15 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:53.989 15:31:15 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:53.989 15:31:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:53.989 15:31:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.989 15:31:15 -- common/autotest_common.sh@10 -- # set +x 00:05:53.989 ************************************ 00:05:53.989 START TEST spdkcli_tcp 00:05:53.989 ************************************ 00:05:53.989 15:31:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:53.989 * Looking for test storage... 00:05:53.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:53.989 15:31:15 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:53.989 15:31:15 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:53.989 15:31:15 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:53.989 15:31:15 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:53.989 15:31:15 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:53.989 15:31:15 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:53.989 15:31:15 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:53.989 15:31:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:53.989 15:31:15 -- common/autotest_common.sh@10 -- # set +x 00:05:53.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.989 15:31:15 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=56935 00:05:53.989 15:31:15 -- spdkcli/tcp.sh@27 -- # waitforlisten 56935 00:05:53.989 15:31:15 -- common/autotest_common.sh@819 -- # '[' -z 56935 ']' 00:05:53.989 15:31:15 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:53.989 15:31:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.989 15:31:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:53.989 15:31:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.989 15:31:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:53.989 15:31:15 -- common/autotest_common.sh@10 -- # set +x 00:05:53.989 [2024-07-24 15:31:15.322695] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
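spdk_tgt itself listens only on the UNIX domain socket /var/tmp/spdk.sock, so once it is up tcp.sh bridges that socket to 127.0.0.1:9998 with socat and drives rpc.py over TCP, as traced below. The bridge and the TCP-side call, with paths shortened but flags as the test issues them (-r caps connection retries, -t is the per-request timeout):

  # Expose the target's UNIX-domain RPC socket on TCP port 9998.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!

  # Query the method list through the TCP path: up to 100 retries, 2s timeout.
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  kill "$socat_pid"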
00:05:53.989 [2024-07-24 15:31:15.322859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56935 ] 00:05:53.989 [2024-07-24 15:31:15.496773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.248 [2024-07-24 15:31:15.728542] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:54.248 [2024-07-24 15:31:15.729013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.248 [2024-07-24 15:31:15.729042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.625 15:31:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:55.625 15:31:16 -- common/autotest_common.sh@852 -- # return 0 00:05:55.625 15:31:16 -- spdkcli/tcp.sh@31 -- # socat_pid=56965 00:05:55.625 15:31:16 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:55.625 15:31:16 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:55.625 [ 00:05:55.625 "bdev_malloc_delete", 00:05:55.625 "bdev_malloc_create", 00:05:55.625 "bdev_null_resize", 00:05:55.625 "bdev_null_delete", 00:05:55.625 "bdev_null_create", 00:05:55.625 "bdev_nvme_cuse_unregister", 00:05:55.625 "bdev_nvme_cuse_register", 00:05:55.625 "bdev_opal_new_user", 00:05:55.625 "bdev_opal_set_lock_state", 00:05:55.625 "bdev_opal_delete", 00:05:55.625 "bdev_opal_get_info", 00:05:55.625 "bdev_opal_create", 00:05:55.625 "bdev_nvme_opal_revert", 00:05:55.625 "bdev_nvme_opal_init", 00:05:55.625 "bdev_nvme_send_cmd", 00:05:55.625 "bdev_nvme_get_path_iostat", 00:05:55.625 "bdev_nvme_get_mdns_discovery_info", 00:05:55.625 "bdev_nvme_stop_mdns_discovery", 00:05:55.625 "bdev_nvme_start_mdns_discovery", 00:05:55.625 "bdev_nvme_set_multipath_policy", 00:05:55.625 "bdev_nvme_set_preferred_path", 00:05:55.625 "bdev_nvme_get_io_paths", 00:05:55.625 "bdev_nvme_remove_error_injection", 00:05:55.625 "bdev_nvme_add_error_injection", 00:05:55.625 "bdev_nvme_get_discovery_info", 00:05:55.625 "bdev_nvme_stop_discovery", 00:05:55.625 "bdev_nvme_start_discovery", 00:05:55.625 "bdev_nvme_get_controller_health_info", 00:05:55.625 "bdev_nvme_disable_controller", 00:05:55.625 "bdev_nvme_enable_controller", 00:05:55.625 "bdev_nvme_reset_controller", 00:05:55.625 "bdev_nvme_get_transport_statistics", 00:05:55.625 "bdev_nvme_apply_firmware", 00:05:55.625 "bdev_nvme_detach_controller", 00:05:55.625 "bdev_nvme_get_controllers", 00:05:55.625 "bdev_nvme_attach_controller", 00:05:55.625 "bdev_nvme_set_hotplug", 00:05:55.625 "bdev_nvme_set_options", 00:05:55.625 "bdev_passthru_delete", 00:05:55.625 "bdev_passthru_create", 00:05:55.625 "bdev_lvol_grow_lvstore", 00:05:55.625 "bdev_lvol_get_lvols", 00:05:55.625 "bdev_lvol_get_lvstores", 00:05:55.625 "bdev_lvol_delete", 00:05:55.625 "bdev_lvol_set_read_only", 00:05:55.625 "bdev_lvol_resize", 00:05:55.625 "bdev_lvol_decouple_parent", 00:05:55.625 "bdev_lvol_inflate", 00:05:55.625 "bdev_lvol_rename", 00:05:55.625 "bdev_lvol_clone_bdev", 00:05:55.625 "bdev_lvol_clone", 00:05:55.625 "bdev_lvol_snapshot", 00:05:55.625 "bdev_lvol_create", 00:05:55.625 "bdev_lvol_delete_lvstore", 00:05:55.625 "bdev_lvol_rename_lvstore", 00:05:55.625 "bdev_lvol_create_lvstore", 00:05:55.625 "bdev_raid_set_options", 00:05:55.625 "bdev_raid_remove_base_bdev", 00:05:55.625 "bdev_raid_add_base_bdev", 
00:05:55.625 "bdev_raid_delete", 00:05:55.625 "bdev_raid_create", 00:05:55.625 "bdev_raid_get_bdevs", 00:05:55.625 "bdev_error_inject_error", 00:05:55.625 "bdev_error_delete", 00:05:55.625 "bdev_error_create", 00:05:55.625 "bdev_split_delete", 00:05:55.625 "bdev_split_create", 00:05:55.625 "bdev_delay_delete", 00:05:55.625 "bdev_delay_create", 00:05:55.625 "bdev_delay_update_latency", 00:05:55.625 "bdev_zone_block_delete", 00:05:55.625 "bdev_zone_block_create", 00:05:55.625 "blobfs_create", 00:05:55.625 "blobfs_detect", 00:05:55.625 "blobfs_set_cache_size", 00:05:55.625 "bdev_xnvme_delete", 00:05:55.625 "bdev_xnvme_create", 00:05:55.625 "bdev_aio_delete", 00:05:55.625 "bdev_aio_rescan", 00:05:55.625 "bdev_aio_create", 00:05:55.625 "bdev_ftl_set_property", 00:05:55.625 "bdev_ftl_get_properties", 00:05:55.625 "bdev_ftl_get_stats", 00:05:55.625 "bdev_ftl_unmap", 00:05:55.625 "bdev_ftl_unload", 00:05:55.625 "bdev_ftl_delete", 00:05:55.625 "bdev_ftl_load", 00:05:55.625 "bdev_ftl_create", 00:05:55.625 "bdev_virtio_attach_controller", 00:05:55.625 "bdev_virtio_scsi_get_devices", 00:05:55.625 "bdev_virtio_detach_controller", 00:05:55.625 "bdev_virtio_blk_set_hotplug", 00:05:55.625 "bdev_iscsi_delete", 00:05:55.625 "bdev_iscsi_create", 00:05:55.625 "bdev_iscsi_set_options", 00:05:55.625 "accel_error_inject_error", 00:05:55.625 "ioat_scan_accel_module", 00:05:55.625 "dsa_scan_accel_module", 00:05:55.625 "iaa_scan_accel_module", 00:05:55.625 "iscsi_set_options", 00:05:55.625 "iscsi_get_auth_groups", 00:05:55.625 "iscsi_auth_group_remove_secret", 00:05:55.625 "iscsi_auth_group_add_secret", 00:05:55.625 "iscsi_delete_auth_group", 00:05:55.625 "iscsi_create_auth_group", 00:05:55.625 "iscsi_set_discovery_auth", 00:05:55.625 "iscsi_get_options", 00:05:55.625 "iscsi_target_node_request_logout", 00:05:55.625 "iscsi_target_node_set_redirect", 00:05:55.625 "iscsi_target_node_set_auth", 00:05:55.625 "iscsi_target_node_add_lun", 00:05:55.625 "iscsi_get_connections", 00:05:55.625 "iscsi_portal_group_set_auth", 00:05:55.625 "iscsi_start_portal_group", 00:05:55.625 "iscsi_delete_portal_group", 00:05:55.625 "iscsi_create_portal_group", 00:05:55.625 "iscsi_get_portal_groups", 00:05:55.625 "iscsi_delete_target_node", 00:05:55.625 "iscsi_target_node_remove_pg_ig_maps", 00:05:55.625 "iscsi_target_node_add_pg_ig_maps", 00:05:55.625 "iscsi_create_target_node", 00:05:55.625 "iscsi_get_target_nodes", 00:05:55.625 "iscsi_delete_initiator_group", 00:05:55.625 "iscsi_initiator_group_remove_initiators", 00:05:55.625 "iscsi_initiator_group_add_initiators", 00:05:55.625 "iscsi_create_initiator_group", 00:05:55.625 "iscsi_get_initiator_groups", 00:05:55.625 "nvmf_set_crdt", 00:05:55.625 "nvmf_set_config", 00:05:55.625 "nvmf_set_max_subsystems", 00:05:55.625 "nvmf_subsystem_get_listeners", 00:05:55.625 "nvmf_subsystem_get_qpairs", 00:05:55.625 "nvmf_subsystem_get_controllers", 00:05:55.625 "nvmf_get_stats", 00:05:55.625 "nvmf_get_transports", 00:05:55.625 "nvmf_create_transport", 00:05:55.625 "nvmf_get_targets", 00:05:55.625 "nvmf_delete_target", 00:05:55.625 "nvmf_create_target", 00:05:55.625 "nvmf_subsystem_allow_any_host", 00:05:55.625 "nvmf_subsystem_remove_host", 00:05:55.625 "nvmf_subsystem_add_host", 00:05:55.625 "nvmf_subsystem_remove_ns", 00:05:55.625 "nvmf_subsystem_add_ns", 00:05:55.625 "nvmf_subsystem_listener_set_ana_state", 00:05:55.625 "nvmf_discovery_get_referrals", 00:05:55.625 "nvmf_discovery_remove_referral", 00:05:55.625 "nvmf_discovery_add_referral", 00:05:55.625 "nvmf_subsystem_remove_listener", 00:05:55.625 
"nvmf_subsystem_add_listener", 00:05:55.625 "nvmf_delete_subsystem", 00:05:55.625 "nvmf_create_subsystem", 00:05:55.625 "nvmf_get_subsystems", 00:05:55.625 "env_dpdk_get_mem_stats", 00:05:55.625 "nbd_get_disks", 00:05:55.625 "nbd_stop_disk", 00:05:55.625 "nbd_start_disk", 00:05:55.625 "ublk_recover_disk", 00:05:55.625 "ublk_get_disks", 00:05:55.625 "ublk_stop_disk", 00:05:55.625 "ublk_start_disk", 00:05:55.625 "ublk_destroy_target", 00:05:55.625 "ublk_create_target", 00:05:55.625 "virtio_blk_create_transport", 00:05:55.625 "virtio_blk_get_transports", 00:05:55.625 "vhost_controller_set_coalescing", 00:05:55.625 "vhost_get_controllers", 00:05:55.625 "vhost_delete_controller", 00:05:55.625 "vhost_create_blk_controller", 00:05:55.625 "vhost_scsi_controller_remove_target", 00:05:55.625 "vhost_scsi_controller_add_target", 00:05:55.625 "vhost_start_scsi_controller", 00:05:55.625 "vhost_create_scsi_controller", 00:05:55.625 "thread_set_cpumask", 00:05:55.625 "framework_get_scheduler", 00:05:55.625 "framework_set_scheduler", 00:05:55.625 "framework_get_reactors", 00:05:55.625 "thread_get_io_channels", 00:05:55.625 "thread_get_pollers", 00:05:55.625 "thread_get_stats", 00:05:55.625 "framework_monitor_context_switch", 00:05:55.625 "spdk_kill_instance", 00:05:55.625 "log_enable_timestamps", 00:05:55.625 "log_get_flags", 00:05:55.625 "log_clear_flag", 00:05:55.625 "log_set_flag", 00:05:55.625 "log_get_level", 00:05:55.625 "log_set_level", 00:05:55.625 "log_get_print_level", 00:05:55.625 "log_set_print_level", 00:05:55.625 "framework_enable_cpumask_locks", 00:05:55.625 "framework_disable_cpumask_locks", 00:05:55.625 "framework_wait_init", 00:05:55.625 "framework_start_init", 00:05:55.625 "scsi_get_devices", 00:05:55.625 "bdev_get_histogram", 00:05:55.625 "bdev_enable_histogram", 00:05:55.625 "bdev_set_qos_limit", 00:05:55.625 "bdev_set_qd_sampling_period", 00:05:55.625 "bdev_get_bdevs", 00:05:55.625 "bdev_reset_iostat", 00:05:55.625 "bdev_get_iostat", 00:05:55.625 "bdev_examine", 00:05:55.625 "bdev_wait_for_examine", 00:05:55.625 "bdev_set_options", 00:05:55.625 "notify_get_notifications", 00:05:55.625 "notify_get_types", 00:05:55.625 "accel_get_stats", 00:05:55.625 "accel_set_options", 00:05:55.625 "accel_set_driver", 00:05:55.625 "accel_crypto_key_destroy", 00:05:55.625 "accel_crypto_keys_get", 00:05:55.625 "accel_crypto_key_create", 00:05:55.625 "accel_assign_opc", 00:05:55.625 "accel_get_module_info", 00:05:55.625 "accel_get_opc_assignments", 00:05:55.625 "vmd_rescan", 00:05:55.625 "vmd_remove_device", 00:05:55.625 "vmd_enable", 00:05:55.626 "sock_set_default_impl", 00:05:55.626 "sock_impl_set_options", 00:05:55.626 "sock_impl_get_options", 00:05:55.626 "iobuf_get_stats", 00:05:55.626 "iobuf_set_options", 00:05:55.626 "framework_get_pci_devices", 00:05:55.626 "framework_get_config", 00:05:55.626 "framework_get_subsystems", 00:05:55.626 "trace_get_info", 00:05:55.626 "trace_get_tpoint_group_mask", 00:05:55.626 "trace_disable_tpoint_group", 00:05:55.626 "trace_enable_tpoint_group", 00:05:55.626 "trace_clear_tpoint_mask", 00:05:55.626 "trace_set_tpoint_mask", 00:05:55.626 "spdk_get_version", 00:05:55.626 "rpc_get_methods" 00:05:55.626 ] 00:05:55.626 15:31:17 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:55.626 15:31:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:55.626 15:31:17 -- common/autotest_common.sh@10 -- # set +x 00:05:55.884 15:31:17 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:55.884 15:31:17 -- spdkcli/tcp.sh@38 -- # killprocess 56935 00:05:55.884 
15:31:17 -- common/autotest_common.sh@926 -- # '[' -z 56935 ']' 00:05:55.884 15:31:17 -- common/autotest_common.sh@930 -- # kill -0 56935 00:05:55.884 15:31:17 -- common/autotest_common.sh@931 -- # uname 00:05:55.884 15:31:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:55.884 15:31:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56935 00:05:55.884 killing process with pid 56935 00:05:55.884 15:31:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:55.884 15:31:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:55.884 15:31:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56935' 00:05:55.884 15:31:17 -- common/autotest_common.sh@945 -- # kill 56935 00:05:55.884 15:31:17 -- common/autotest_common.sh@950 -- # wait 56935 00:05:57.785 ************************************ 00:05:57.785 END TEST spdkcli_tcp 00:05:57.785 ************************************ 00:05:57.785 00:05:57.785 real 0m4.168s 00:05:57.785 user 0m7.674s 00:05:57.785 sys 0m0.532s 00:05:57.785 15:31:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.785 15:31:19 -- common/autotest_common.sh@10 -- # set +x 00:05:57.785 15:31:19 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:57.785 15:31:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:57.785 15:31:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.785 15:31:19 -- common/autotest_common.sh@10 -- # set +x 00:05:57.785 ************************************ 00:05:57.785 START TEST dpdk_mem_utility 00:05:57.785 ************************************ 00:05:57.785 15:31:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.044 * Looking for test storage... 00:05:58.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:58.044 15:31:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:58.044 15:31:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57050 00:05:58.044 15:31:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:58.044 15:31:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57050 00:05:58.044 15:31:19 -- common/autotest_common.sh@819 -- # '[' -z 57050 ']' 00:05:58.044 15:31:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.044 15:31:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:58.044 15:31:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.044 15:31:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:58.044 15:31:19 -- common/autotest_common.sh@10 -- # set +x 00:05:58.044 [2024-07-24 15:31:19.556992] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
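Once this target is up, test_dpdk_mem_info.sh exercises two pieces: the env_dpdk_get_mem_stats RPC, which makes the target write its allocator state to a file (the trace below shows it answering with /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py, which digests that dump. The same two steps run by hand, assuming a target on the default RPC socket and the default dump location:

  # Ask the running target to dump DPDK memory statistics; the RPC
  # replies with the path of the file it wrote.
  scripts/rpc.py env_dpdk_get_mem_stats

  # Summarize heaps, mempools and memzones from the dump...
  scripts/dpdk_mem_info.py
  # ...or, as in the trace below, list every element in heap 0.
  scripts/dpdk_mem_info.py -m 0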
00:05:58.044 [2024-07-24 15:31:19.557346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57050 ] 00:05:58.303 [2024-07-24 15:31:19.717987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.562 [2024-07-24 15:31:19.944814] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.562 [2024-07-24 15:31:19.945288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.978 15:31:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.978 15:31:21 -- common/autotest_common.sh@852 -- # return 0 00:05:59.978 15:31:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:59.978 15:31:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:59.978 15:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.978 15:31:21 -- common/autotest_common.sh@10 -- # set +x 00:05:59.978 { 00:05:59.978 "filename": "/tmp/spdk_mem_dump.txt" 00:05:59.978 } 00:05:59.978 15:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.978 15:31:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:59.978 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:59.978 1 heaps totaling size 820.000000 MiB 00:05:59.978 size: 820.000000 MiB heap id: 0 00:05:59.978 end heaps---------- 00:05:59.978 8 mempools totaling size 598.116089 MiB 00:05:59.978 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:59.978 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:59.978 size: 84.521057 MiB name: bdev_io_57050 00:05:59.978 size: 51.011292 MiB name: evtpool_57050 00:05:59.978 size: 50.003479 MiB name: msgpool_57050 00:05:59.978 size: 21.763794 MiB name: PDU_Pool 00:05:59.978 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:59.978 size: 0.026123 MiB name: Session_Pool 00:05:59.978 end mempools------- 00:05:59.978 6 memzones totaling size 4.142822 MiB 00:05:59.978 size: 1.000366 MiB name: RG_ring_0_57050 00:05:59.978 size: 1.000366 MiB name: RG_ring_1_57050 00:05:59.978 size: 1.000366 MiB name: RG_ring_4_57050 00:05:59.978 size: 1.000366 MiB name: RG_ring_5_57050 00:05:59.978 size: 0.125366 MiB name: RG_ring_2_57050 00:05:59.978 size: 0.015991 MiB name: RG_ring_3_57050 00:05:59.978 end memzones------- 00:05:59.978 15:31:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:59.978 heap id: 0 total size: 820.000000 MiB number of busy elements: 303 number of free elements: 18 00:05:59.978 list of free elements. 
size: 18.450806 MiB 00:05:59.978 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:59.978 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:59.978 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:59.978 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:59.978 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:59.978 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:59.978 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:59.978 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:59.978 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:59.978 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:59.978 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:59.978 element at address: 0x200000200000 with size: 0.829224 MiB 00:05:59.978 element at address: 0x20001b000000 with size: 0.564148 MiB 00:05:59.978 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:59.978 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:59.978 element at address: 0x200013800000 with size: 0.467896 MiB 00:05:59.978 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:59.978 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:59.978 list of standard malloc elements. size: 199.284790 MiB 00:05:59.978 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:59.978 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:59.978 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:59.978 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:59.978 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:59.979 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:59.979 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:59.979 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:59.979 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:59.979 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:59.979 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:59.979 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:05:59.979 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:59.979 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001927d1c0 
with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:59.979 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0929c0 with size: 0.000244 MiB 
00:05:59.980 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:05:59.980 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:59.980 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846af80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846b080 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846b180 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:59.980 element at 
address: 0x20002846b380 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846b480 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846b580 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846b980 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846c980 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846e080 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846e280 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846e480 
with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846e580 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846e680 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:05:59.980 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:05:59.980 list of memzone associated elements. 
size: 602.264404 MiB 00:05:59.980 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:59.980 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:59.980 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:59.980 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:59.981 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:59.981 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_57050_0 00:05:59.981 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:59.981 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57050_0 00:05:59.981 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:59.981 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57050_0 00:05:59.981 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:59.981 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:59.981 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:59.981 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:59.981 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:59.981 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57050 00:05:59.981 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:59.981 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57050 00:05:59.981 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:59.981 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57050 00:05:59.981 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:59.981 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:59.981 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:59.981 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:59.981 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:59.981 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:59.981 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:59.981 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:59.981 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:59.981 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57050 00:05:59.981 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:59.981 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57050 00:05:59.981 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:59.981 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57050 00:05:59.981 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:59.981 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57050 00:05:59.981 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:59.981 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57050 00:05:59.981 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:59.981 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:59.981 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:59.981 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:59.981 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:59.981 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:59.981 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:59.981 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_57050 00:05:59.981 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:59.981 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:59.981 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:59.981 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:59.981 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:59.981 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57050 00:05:59.981 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:59.981 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:59.981 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:59.981 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57050 00:05:59.981 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:59.981 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57050 00:05:59.981 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:59.981 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:59.981 15:31:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:59.981 15:31:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57050 00:05:59.981 15:31:21 -- common/autotest_common.sh@926 -- # '[' -z 57050 ']' 00:05:59.981 15:31:21 -- common/autotest_common.sh@930 -- # kill -0 57050 00:05:59.981 15:31:21 -- common/autotest_common.sh@931 -- # uname 00:05:59.981 15:31:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:59.981 15:31:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57050 00:05:59.981 killing process with pid 57050 00:05:59.981 15:31:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:59.981 15:31:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:59.981 15:31:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57050' 00:05:59.981 15:31:21 -- common/autotest_common.sh@945 -- # kill 57050 00:05:59.981 15:31:21 -- common/autotest_common.sh@950 -- # wait 57050 00:06:01.884 00:06:01.884 real 0m3.845s 00:06:01.884 user 0m4.153s 00:06:01.884 sys 0m0.476s 00:06:01.884 15:31:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.884 15:31:23 -- common/autotest_common.sh@10 -- # set +x 00:06:01.884 ************************************ 00:06:01.884 END TEST dpdk_mem_utility 00:06:01.884 ************************************ 00:06:01.884 15:31:23 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:01.884 15:31:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:01.884 15:31:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.884 15:31:23 -- common/autotest_common.sh@10 -- # set +x 00:06:01.884 ************************************ 00:06:01.884 START TEST event 00:06:01.884 ************************************ 00:06:01.884 15:31:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:01.884 * Looking for test storage... 
00:06:01.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:01.884 15:31:23 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:01.884 15:31:23 -- bdev/nbd_common.sh@6 -- # set -e 00:06:01.884 15:31:23 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:01.884 15:31:23 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:01.884 15:31:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.884 15:31:23 -- common/autotest_common.sh@10 -- # set +x 00:06:01.884 ************************************ 00:06:01.884 START TEST event_perf 00:06:01.884 ************************************ 00:06:01.884 15:31:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:01.884 Running I/O for 1 seconds...[2024-07-24 15:31:23.368491] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:01.884 [2024-07-24 15:31:23.368796] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57151 ] 00:06:02.142 [2024-07-24 15:31:23.536690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:02.142 [2024-07-24 15:31:23.705176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.142 [2024-07-24 15:31:23.705305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.142 [2024-07-24 15:31:23.705428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.142 [2024-07-24 15:31:23.705442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.517 Running I/O for 1 seconds... 00:06:03.517 lcore 0: 192929 00:06:03.517 lcore 1: 192929 00:06:03.517 lcore 2: 192929 00:06:03.517 lcore 3: 192929 00:06:03.517 done. 00:06:03.517 00:06:03.517 real 0m1.734s 00:06:03.517 user 0m4.494s 00:06:03.517 ************************************ 00:06:03.517 END TEST event_perf 00:06:03.517 ************************************ 00:06:03.517 sys 0m0.115s 00:06:03.517 15:31:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.517 15:31:25 -- common/autotest_common.sh@10 -- # set +x 00:06:03.517 15:31:25 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:03.517 15:31:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:03.517 15:31:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.517 15:31:25 -- common/autotest_common.sh@10 -- # set +x 00:06:03.775 ************************************ 00:06:03.775 START TEST event_reactor 00:06:03.775 ************************************ 00:06:03.775 15:31:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:03.775 [2024-07-24 15:31:25.156707] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
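The event_perf run above can be repeated by hand with the same flags the harness used (binary path as in this workspace):

  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
  # -m 0xF starts four reactors (lcores 0-3); -t 1 runs the event loop for one
  # second and prints a per-lcore event count, roughly 193k per core in this run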
00:06:03.775 [2024-07-24 15:31:25.156843] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57196 ] 00:06:03.775 [2024-07-24 15:31:25.314976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.034 [2024-07-24 15:31:25.473659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.430 test_start 00:06:05.430 oneshot 00:06:05.430 tick 100 00:06:05.430 tick 100 00:06:05.430 tick 250 00:06:05.430 tick 100 00:06:05.430 tick 100 00:06:05.430 tick 100 00:06:05.430 tick 250 00:06:05.430 tick 500 00:06:05.430 tick 100 00:06:05.430 tick 100 00:06:05.430 tick 250 00:06:05.430 tick 100 00:06:05.430 tick 100 00:06:05.430 test_end 00:06:05.430 00:06:05.430 real 0m1.663s 00:06:05.430 user 0m1.478s 00:06:05.430 sys 0m0.075s 00:06:05.430 ************************************ 00:06:05.430 END TEST event_reactor 00:06:05.430 ************************************ 00:06:05.430 15:31:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.430 15:31:26 -- common/autotest_common.sh@10 -- # set +x 00:06:05.430 15:31:26 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.430 15:31:26 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:05.430 15:31:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.430 15:31:26 -- common/autotest_common.sh@10 -- # set +x 00:06:05.430 ************************************ 00:06:05.430 START TEST event_reactor_perf 00:06:05.430 ************************************ 00:06:05.430 15:31:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.430 [2024-07-24 15:31:26.879110] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
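Every test in this log is driven through the run_test wrapper from autotest_common.sh, which produces the START/END banners and the real/user/sys timings seen throughout. A condensed sketch of that behavior; the actual helper also toggles xtrace and validates its argument count, as the '[' N -le 1 ']' checks above show:

  run_test() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"                                  # source of the real/user/sys lines
    echo "************ END TEST $name ************"
  }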
00:06:05.430 [2024-07-24 15:31:26.879294] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57227 ] 00:06:05.689 [2024-07-24 15:31:27.051898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.689 [2024-07-24 15:31:27.271955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.063 test_start 00:06:07.063 test_end 00:06:07.063 Performance: 309848 events per second 00:06:07.063 00:06:07.063 real 0m1.741s 00:06:07.063 user 0m1.534s 00:06:07.063 sys 0m0.097s 00:06:07.063 15:31:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.063 15:31:28 -- common/autotest_common.sh@10 -- # set +x 00:06:07.063 ************************************ 00:06:07.063 END TEST event_reactor_perf 00:06:07.063 ************************************ 00:06:07.063 15:31:28 -- event/event.sh@49 -- # uname -s 00:06:07.063 15:31:28 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:07.063 15:31:28 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:07.063 15:31:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:07.063 15:31:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.063 15:31:28 -- common/autotest_common.sh@10 -- # set +x 00:06:07.063 ************************************ 00:06:07.063 START TEST event_scheduler 00:06:07.063 ************************************ 00:06:07.063 15:31:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:07.321 * Looking for test storage... 00:06:07.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:07.321 15:31:28 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:07.321 15:31:28 -- scheduler/scheduler.sh@35 -- # scheduler_pid=57294 00:06:07.321 15:31:28 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.321 15:31:28 -- scheduler/scheduler.sh@37 -- # waitforlisten 57294 00:06:07.321 15:31:28 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:07.321 15:31:28 -- common/autotest_common.sh@819 -- # '[' -z 57294 ']' 00:06:07.321 15:31:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.321 15:31:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.321 15:31:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.321 15:31:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.321 15:31:28 -- common/autotest_common.sh@10 -- # set +x 00:06:07.321 [2024-07-24 15:31:28.790176] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
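The scheduler launch above follows the standard SPDK app startup pattern: run the binary in the background with --wait-for-rpc, record its pid, install a cleanup trap, and block until the RPC socket answers. In outline, with the flags copied from the trace (waitforlisten is the autotest_common.sh helper that polls the socket):

  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!
  trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
  waitforlisten "$scheduler_pid"   # returns once /var/tmp/spdk.sock accepts RPCs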
00:06:07.321 [2024-07-24 15:31:28.790339] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57294 ] 00:06:07.580 [2024-07-24 15:31:28.954003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.580 [2024-07-24 15:31:29.167009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.580 [2024-07-24 15:31:29.167164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.580 [2024-07-24 15:31:29.167592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.580 [2024-07-24 15:31:29.167608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.514 15:31:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.514 15:31:29 -- common/autotest_common.sh@852 -- # return 0 00:06:08.514 15:31:29 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:08.514 15:31:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.514 15:31:29 -- common/autotest_common.sh@10 -- # set +x 00:06:08.514 POWER: Env isn't set yet! 00:06:08.514 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:08.514 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.514 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.514 POWER: Attempting to initialise PSTAT power management... 00:06:08.514 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.514 POWER: Cannot set governor of lcore 0 to performance 00:06:08.514 POWER: Attempting to initialise AMD PSTATE power management... 00:06:08.514 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.514 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.514 POWER: Attempting to initialise CPPC power management... 00:06:08.514 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.514 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.514 POWER: Attempting to initialise VM power management... 
00:06:08.514 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:08.514 POWER: Unable to set Power Management Environment for lcore 0 00:06:08.515 [2024-07-24 15:31:29.781223] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:08.515 [2024-07-24 15:31:29.781248] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:08.515 [2024-07-24 15:31:29.781273] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:08.515 [2024-07-24 15:31:29.781295] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:08.515 [2024-07-24 15:31:29.781312] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:08.515 [2024-07-24 15:31:29.781324] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:08.515 15:31:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.515 15:31:29 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:08.515 15:31:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.515 15:31:29 -- common/autotest_common.sh@10 -- # set +x 00:06:08.515 [2024-07-24 15:31:30.029501] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:08.515 15:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.515 15:31:30 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:08.515 15:31:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.515 15:31:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.515 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:08.515 ************************************ 00:06:08.515 START TEST scheduler_create_thread 00:06:08.515 ************************************ 00:06:08.515 15:31:30 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:08.515 15:31:30 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:08.515 15:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.515 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:08.515 2 00:06:08.515 15:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.515 15:31:30 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:08.515 15:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.515 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:08.515 3 00:06:08.515 15:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.515 15:31:30 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:08.515 15:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.515 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:08.515 4 00:06:08.515 15:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.515 15:31:30 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:08.515 15:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.515 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:08.515 5 00:06:08.515 15:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.515 15:31:30 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:08.515 15:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.515 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:08.515 6 00:06:08.515 15:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.515 15:31:30 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:08.515 15:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.515 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:08.515 7 00:06:08.515 15:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.515 15:31:30 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:08.515 15:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.515 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:08.515 8 00:06:08.515 15:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.515 15:31:30 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:08.515 15:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.515 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:08.515 9 00:06:08.515 15:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.515 15:31:30 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:08.515 15:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.773 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:08.773 10 00:06:08.773 15:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.773 15:31:30 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:08.774 15:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.774 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:08.774 15:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.774 15:31:30 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:08.774 15:31:30 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:08.774 15:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.774 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:08.774 15:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.774 15:31:30 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:08.774 15:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.774 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:09.709 15:31:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:09.709 15:31:31 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:09.709 15:31:31 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:09.709 15:31:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:09.709 15:31:31 -- common/autotest_common.sh@10 -- # set +x 00:06:10.669 15:31:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:10.669 00:06:10.669 real 0m2.138s 00:06:10.669 user 0m0.017s 00:06:10.669 sys 0m0.007s 00:06:10.669 15:31:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.669 15:31:32 -- common/autotest_common.sh@10 -- # set +x 00:06:10.669 
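The thread-management calls above go through rpc_cmd, a thin wrapper over scripts/rpc.py, with a test-local plugin loaded via --plugin. Spelled out directly (plugin name and arguments as in the trace; thread ids 11 and 12 are the ones this particular run happened to create):

  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread 11 to 50% active
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12          # drop thread 12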
************************************ 00:06:10.669 END TEST scheduler_create_thread 00:06:10.669 ************************************ 00:06:10.669 15:31:32 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:10.669 15:31:32 -- scheduler/scheduler.sh@46 -- # killprocess 57294 00:06:10.669 15:31:32 -- common/autotest_common.sh@926 -- # '[' -z 57294 ']' 00:06:10.669 15:31:32 -- common/autotest_common.sh@930 -- # kill -0 57294 00:06:10.669 15:31:32 -- common/autotest_common.sh@931 -- # uname 00:06:10.669 15:31:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:10.669 15:31:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57294 00:06:10.669 15:31:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:10.669 15:31:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:10.669 killing process with pid 57294 00:06:10.669 15:31:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57294' 00:06:10.669 15:31:32 -- common/autotest_common.sh@945 -- # kill 57294 00:06:10.669 15:31:32 -- common/autotest_common.sh@950 -- # wait 57294 00:06:11.235 [2024-07-24 15:31:32.658879] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:12.170 00:06:12.170 real 0m5.024s 00:06:12.170 user 0m8.671s 00:06:12.170 sys 0m0.371s 00:06:12.170 15:31:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.170 15:31:33 -- common/autotest_common.sh@10 -- # set +x 00:06:12.170 ************************************ 00:06:12.170 END TEST event_scheduler 00:06:12.170 ************************************ 00:06:12.170 15:31:33 -- event/event.sh@51 -- # modprobe -n nbd 00:06:12.170 15:31:33 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:12.170 15:31:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.170 15:31:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.170 15:31:33 -- common/autotest_common.sh@10 -- # set +x 00:06:12.170 ************************************ 00:06:12.170 START TEST app_repeat 00:06:12.170 ************************************ 00:06:12.170 15:31:33 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:12.170 15:31:33 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.170 15:31:33 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.170 15:31:33 -- event/event.sh@13 -- # local nbd_list 00:06:12.170 15:31:33 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.170 15:31:33 -- event/event.sh@14 -- # local bdev_list 00:06:12.170 15:31:33 -- event/event.sh@15 -- # local repeat_times=4 00:06:12.170 15:31:33 -- event/event.sh@17 -- # modprobe nbd 00:06:12.170 15:31:33 -- event/event.sh@19 -- # repeat_pid=57400 00:06:12.170 15:31:33 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:12.170 15:31:33 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.170 Process app_repeat pid: 57400 00:06:12.170 15:31:33 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57400' 00:06:12.170 15:31:33 -- event/event.sh@23 -- # for i in {0..2} 00:06:12.170 spdk_app_start Round 0 00:06:12.170 15:31:33 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:12.170 15:31:33 -- event/event.sh@25 -- # waitforlisten 57400 /var/tmp/spdk-nbd.sock 00:06:12.170 15:31:33 -- common/autotest_common.sh@819 -- # '[' -z 57400 ']' 00:06:12.170 15:31:33 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.170 15:31:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:12.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.170 15:31:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.170 15:31:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:12.170 15:31:33 -- common/autotest_common.sh@10 -- # set +x 00:06:12.429 [2024-07-24 15:31:33.774937] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:12.429 [2024-07-24 15:31:33.775151] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57400 ] 00:06:12.429 [2024-07-24 15:31:33.946718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.688 [2024-07-24 15:31:34.147043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.688 [2024-07-24 15:31:34.147054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.254 15:31:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:13.254 15:31:34 -- common/autotest_common.sh@852 -- # return 0 00:06:13.254 15:31:34 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.512 Malloc0 00:06:13.770 15:31:35 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.029 Malloc1 00:06:14.029 15:31:35 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.029 15:31:35 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.029 15:31:35 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.029 15:31:35 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.029 15:31:35 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.029 15:31:35 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.029 15:31:35 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.029 15:31:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.029 15:31:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.029 15:31:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.029 15:31:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.029 15:31:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.029 15:31:35 -- bdev/nbd_common.sh@12 -- # local i 00:06:14.029 15:31:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.029 15:31:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.029 15:31:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.288 /dev/nbd0 00:06:14.288 15:31:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.288 15:31:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.288 15:31:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:14.288 15:31:35 -- common/autotest_common.sh@857 -- # local i 00:06:14.288 15:31:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:14.288 
15:31:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:14.288 15:31:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:14.288 15:31:35 -- common/autotest_common.sh@861 -- # break 00:06:14.288 15:31:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:14.288 15:31:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:14.288 15:31:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.288 1+0 records in 00:06:14.288 1+0 records out 00:06:14.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253915 s, 16.1 MB/s 00:06:14.288 15:31:35 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.288 15:31:35 -- common/autotest_common.sh@874 -- # size=4096 00:06:14.288 15:31:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.288 15:31:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:14.288 15:31:35 -- common/autotest_common.sh@877 -- # return 0 00:06:14.288 15:31:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.288 15:31:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.288 15:31:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.547 /dev/nbd1 00:06:14.547 15:31:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.547 15:31:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.547 15:31:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:14.547 15:31:35 -- common/autotest_common.sh@857 -- # local i 00:06:14.547 15:31:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:14.547 15:31:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:14.547 15:31:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:14.547 15:31:36 -- common/autotest_common.sh@861 -- # break 00:06:14.547 15:31:36 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:14.547 15:31:36 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:14.547 15:31:36 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.547 1+0 records in 00:06:14.547 1+0 records out 00:06:14.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352055 s, 11.6 MB/s 00:06:14.547 15:31:36 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.547 15:31:36 -- common/autotest_common.sh@874 -- # size=4096 00:06:14.547 15:31:36 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.547 15:31:36 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:14.547 15:31:36 -- common/autotest_common.sh@877 -- # return 0 00:06:14.547 15:31:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.547 15:31:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.547 15:31:36 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.547 15:31:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.547 15:31:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.806 { 00:06:14.806 "nbd_device": "/dev/nbd0", 00:06:14.806 "bdev_name": "Malloc0" 00:06:14.806 }, 00:06:14.806 { 00:06:14.806 "nbd_device": 
"/dev/nbd1", 00:06:14.806 "bdev_name": "Malloc1" 00:06:14.806 } 00:06:14.806 ]' 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.806 { 00:06:14.806 "nbd_device": "/dev/nbd0", 00:06:14.806 "bdev_name": "Malloc0" 00:06:14.806 }, 00:06:14.806 { 00:06:14.806 "nbd_device": "/dev/nbd1", 00:06:14.806 "bdev_name": "Malloc1" 00:06:14.806 } 00:06:14.806 ]' 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.806 /dev/nbd1' 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.806 /dev/nbd1' 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.806 256+0 records in 00:06:14.806 256+0 records out 00:06:14.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00851791 s, 123 MB/s 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.806 256+0 records in 00:06:14.806 256+0 records out 00:06:14.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286055 s, 36.7 MB/s 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.806 15:31:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.806 256+0 records in 00:06:14.806 256+0 records out 00:06:14.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0387278 s, 27.1 MB/s 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@51 -- # local i 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.065 15:31:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.324 15:31:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.324 15:31:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.324 15:31:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.324 15:31:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.324 15:31:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.324 15:31:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.324 15:31:36 -- bdev/nbd_common.sh@41 -- # break 00:06:15.324 15:31:36 -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.324 15:31:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.324 15:31:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.582 15:31:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.582 15:31:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.582 15:31:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.582 15:31:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.582 15:31:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.582 15:31:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.582 15:31:36 -- bdev/nbd_common.sh@41 -- # break 00:06:15.582 15:31:36 -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.582 15:31:36 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.582 15:31:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.582 15:31:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.841 15:31:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.841 15:31:37 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.841 15:31:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.841 15:31:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.841 15:31:37 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.841 15:31:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.841 15:31:37 -- bdev/nbd_common.sh@65 -- # true 00:06:15.841 15:31:37 -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.841 15:31:37 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.841 15:31:37 -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.841 15:31:37 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.841 15:31:37 -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.841 15:31:37 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.408 15:31:37 -- event/event.sh@35 -- # sleep 3 00:06:17.344 [2024-07-24 15:31:38.744553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.344 [2024-07-24 15:31:38.903854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.345 
[2024-07-24 15:31:38.903862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.603 [2024-07-24 15:31:39.051016] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:17.603 [2024-07-24 15:31:39.051138] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:19.505 15:31:40 -- event/event.sh@23 -- # for i in {0..2} 00:06:19.505 spdk_app_start Round 1 00:06:19.505 15:31:40 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:19.505 15:31:40 -- event/event.sh@25 -- # waitforlisten 57400 /var/tmp/spdk-nbd.sock 00:06:19.505 15:31:40 -- common/autotest_common.sh@819 -- # '[' -z 57400 ']' 00:06:19.505 15:31:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.505 15:31:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.505 15:31:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:19.505 15:31:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.505 15:31:40 -- common/autotest_common.sh@10 -- # set +x 00:06:19.505 15:31:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:19.505 15:31:40 -- common/autotest_common.sh@852 -- # return 0 00:06:19.505 15:31:40 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.764 Malloc0 00:06:19.764 15:31:41 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.022 Malloc1 00:06:20.280 15:31:41 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.280 15:31:41 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.280 15:31:41 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.280 15:31:41 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.280 15:31:41 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.280 15:31:41 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.280 15:31:41 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.280 15:31:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.280 15:31:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.280 15:31:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.280 15:31:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.280 15:31:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.280 15:31:41 -- bdev/nbd_common.sh@12 -- # local i 00:06:20.280 15:31:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.280 15:31:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.280 15:31:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:20.280 /dev/nbd0 00:06:20.538 15:31:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.538 15:31:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.538 15:31:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:20.538 15:31:41 -- common/autotest_common.sh@857 -- # local i 00:06:20.538 15:31:41 -- common/autotest_common.sh@859 
-- # (( i = 1 )) 00:06:20.538 15:31:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:20.538 15:31:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:20.538 15:31:41 -- common/autotest_common.sh@861 -- # break 00:06:20.538 15:31:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:20.538 15:31:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:20.538 15:31:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.538 1+0 records in 00:06:20.538 1+0 records out 00:06:20.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310786 s, 13.2 MB/s 00:06:20.538 15:31:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.538 15:31:41 -- common/autotest_common.sh@874 -- # size=4096 00:06:20.538 15:31:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.538 15:31:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:20.538 15:31:41 -- common/autotest_common.sh@877 -- # return 0 00:06:20.538 15:31:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.538 15:31:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.538 15:31:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:20.797 /dev/nbd1 00:06:20.797 15:31:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.797 15:31:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.797 15:31:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:20.797 15:31:42 -- common/autotest_common.sh@857 -- # local i 00:06:20.797 15:31:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:20.797 15:31:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:20.797 15:31:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:20.797 15:31:42 -- common/autotest_common.sh@861 -- # break 00:06:20.797 15:31:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:20.797 15:31:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:20.797 15:31:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.797 1+0 records in 00:06:20.797 1+0 records out 00:06:20.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284332 s, 14.4 MB/s 00:06:20.797 15:31:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.797 15:31:42 -- common/autotest_common.sh@874 -- # size=4096 00:06:20.797 15:31:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.797 15:31:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:20.797 15:31:42 -- common/autotest_common.sh@877 -- # return 0 00:06:20.797 15:31:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.797 15:31:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.797 15:31:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.797 15:31:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.797 15:31:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.055 { 00:06:21.055 "nbd_device": "/dev/nbd0", 00:06:21.055 "bdev_name": "Malloc0" 00:06:21.055 }, 00:06:21.055 { 
00:06:21.055 "nbd_device": "/dev/nbd1", 00:06:21.055 "bdev_name": "Malloc1" 00:06:21.055 } 00:06:21.055 ]' 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.055 { 00:06:21.055 "nbd_device": "/dev/nbd0", 00:06:21.055 "bdev_name": "Malloc0" 00:06:21.055 }, 00:06:21.055 { 00:06:21.055 "nbd_device": "/dev/nbd1", 00:06:21.055 "bdev_name": "Malloc1" 00:06:21.055 } 00:06:21.055 ]' 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.055 /dev/nbd1' 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.055 /dev/nbd1' 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.055 256+0 records in 00:06:21.055 256+0 records out 00:06:21.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00825575 s, 127 MB/s 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.055 256+0 records in 00:06:21.055 256+0 records out 00:06:21.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257998 s, 40.6 MB/s 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.055 256+0 records in 00:06:21.055 256+0 records out 00:06:21.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299192 s, 35.0 MB/s 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.055 
15:31:42 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@51 -- # local i 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.055 15:31:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.329 15:31:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.329 15:31:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.329 15:31:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.329 15:31:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.329 15:31:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.329 15:31:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.329 15:31:42 -- bdev/nbd_common.sh@41 -- # break 00:06:21.329 15:31:42 -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.329 15:31:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.329 15:31:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.598 15:31:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.598 15:31:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.598 15:31:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.598 15:31:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.598 15:31:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.598 15:31:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.598 15:31:43 -- bdev/nbd_common.sh@41 -- # break 00:06:21.598 15:31:43 -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.598 15:31:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.598 15:31:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.598 15:31:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.858 15:31:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.858 15:31:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.858 15:31:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.858 15:31:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.858 15:31:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.858 15:31:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.858 15:31:43 -- bdev/nbd_common.sh@65 -- # true 00:06:21.858 15:31:43 -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.858 15:31:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.858 15:31:43 -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.858 15:31:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.858 15:31:43 -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.858 15:31:43 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.423 15:31:43 -- event/event.sh@35 -- # sleep 3 00:06:23.354 [2024-07-24 15:31:44.753443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.354 [2024-07-24 15:31:44.924864] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 0 00:06:23.354 [2024-07-24 15:31:44.924863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.611 [2024-07-24 15:31:45.068552] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.611 [2024-07-24 15:31:45.068666] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.513 spdk_app_start Round 2 00:06:25.513 15:31:46 -- event/event.sh@23 -- # for i in {0..2} 00:06:25.513 15:31:46 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:25.513 15:31:46 -- event/event.sh@25 -- # waitforlisten 57400 /var/tmp/spdk-nbd.sock 00:06:25.513 15:31:46 -- common/autotest_common.sh@819 -- # '[' -z 57400 ']' 00:06:25.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.513 15:31:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.513 15:31:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:25.513 15:31:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.513 15:31:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:25.513 15:31:46 -- common/autotest_common.sh@10 -- # set +x 00:06:25.513 15:31:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.513 15:31:47 -- common/autotest_common.sh@852 -- # return 0 00:06:25.513 15:31:47 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.771 Malloc0 00:06:25.771 15:31:47 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.030 Malloc1 00:06:26.030 15:31:47 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.030 15:31:47 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.030 15:31:47 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.030 15:31:47 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.030 15:31:47 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.030 15:31:47 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.030 15:31:47 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.030 15:31:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.030 15:31:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.030 15:31:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.030 15:31:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.030 15:31:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.030 15:31:47 -- bdev/nbd_common.sh@12 -- # local i 00:06:26.030 15:31:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.030 15:31:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.030 15:31:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.287 /dev/nbd0 00:06:26.287 15:31:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.287 15:31:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.287 15:31:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:26.287 15:31:47 -- common/autotest_common.sh@857 -- # local i 00:06:26.287 15:31:47 
-- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:26.287 15:31:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:26.287 15:31:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:26.287 15:31:47 -- common/autotest_common.sh@861 -- # break 00:06:26.287 15:31:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:26.287 15:31:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:26.287 15:31:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.287 1+0 records in 00:06:26.287 1+0 records out 00:06:26.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406849 s, 10.1 MB/s 00:06:26.287 15:31:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.287 15:31:47 -- common/autotest_common.sh@874 -- # size=4096 00:06:26.287 15:31:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.287 15:31:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:26.287 15:31:47 -- common/autotest_common.sh@877 -- # return 0 00:06:26.287 15:31:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.287 15:31:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.287 15:31:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.545 /dev/nbd1 00:06:26.545 15:31:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.545 15:31:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.545 15:31:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:26.545 15:31:48 -- common/autotest_common.sh@857 -- # local i 00:06:26.545 15:31:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:26.545 15:31:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:26.545 15:31:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:26.545 15:31:48 -- common/autotest_common.sh@861 -- # break 00:06:26.545 15:31:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:26.545 15:31:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:26.545 15:31:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.545 1+0 records in 00:06:26.545 1+0 records out 00:06:26.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441552 s, 9.3 MB/s 00:06:26.545 15:31:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.545 15:31:48 -- common/autotest_common.sh@874 -- # size=4096 00:06:26.545 15:31:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.805 15:31:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:26.805 15:31:48 -- common/autotest_common.sh@877 -- # return 0 00:06:26.805 15:31:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.805 15:31:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.805 15:31:48 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.805 15:31:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.805 15:31:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.064 15:31:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.064 { 00:06:27.064 "nbd_device": "/dev/nbd0", 00:06:27.064 "bdev_name": "Malloc0" 
00:06:27.064 }, 00:06:27.064 { 00:06:27.064 "nbd_device": "/dev/nbd1", 00:06:27.064 "bdev_name": "Malloc1" 00:06:27.064 } 00:06:27.064 ]' 00:06:27.064 15:31:48 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.064 { 00:06:27.064 "nbd_device": "/dev/nbd0", 00:06:27.064 "bdev_name": "Malloc0" 00:06:27.064 }, 00:06:27.064 { 00:06:27.064 "nbd_device": "/dev/nbd1", 00:06:27.064 "bdev_name": "Malloc1" 00:06:27.064 } 00:06:27.064 ]' 00:06:27.064 15:31:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.064 15:31:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.064 /dev/nbd1' 00:06:27.064 15:31:48 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.064 /dev/nbd1' 00:06:27.064 15:31:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.064 15:31:48 -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.064 15:31:48 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.065 256+0 records in 00:06:27.065 256+0 records out 00:06:27.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00937045 s, 112 MB/s 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.065 256+0 records in 00:06:27.065 256+0 records out 00:06:27.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233477 s, 44.9 MB/s 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.065 256+0 records in 00:06:27.065 256+0 records out 00:06:27.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302842 s, 34.6 MB/s 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@51 -- # local i 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.065 15:31:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.323 15:31:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.323 15:31:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.323 15:31:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.323 15:31:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.323 15:31:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.323 15:31:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.323 15:31:48 -- bdev/nbd_common.sh@41 -- # break 00:06:27.323 15:31:48 -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.323 15:31:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.323 15:31:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.581 15:31:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.581 15:31:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.581 15:31:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.581 15:31:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.581 15:31:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.581 15:31:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.581 15:31:49 -- bdev/nbd_common.sh@41 -- # break 00:06:27.581 15:31:49 -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.581 15:31:49 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.581 15:31:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.581 15:31:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.839 15:31:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.839 15:31:49 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.839 15:31:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.098 15:31:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.098 15:31:49 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.098 15:31:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.098 15:31:49 -- bdev/nbd_common.sh@65 -- # true 00:06:28.098 15:31:49 -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.098 15:31:49 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.098 15:31:49 -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.098 15:31:49 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.098 15:31:49 -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.098 15:31:49 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.357 15:31:49 -- event/event.sh@35 -- # sleep 3 00:06:29.291 [2024-07-24 15:31:50.878231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.550 [2024-07-24 15:31:51.036325] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:29.550 [2024-07-24 15:31:51.036331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.808 [2024-07-24 15:31:51.200967] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.808 [2024-07-24 15:31:51.201050] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:31.711 15:31:52 -- event/event.sh@38 -- # waitforlisten 57400 /var/tmp/spdk-nbd.sock 00:06:31.711 15:31:52 -- common/autotest_common.sh@819 -- # '[' -z 57400 ']' 00:06:31.711 15:31:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.711 15:31:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:31.711 15:31:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.711 15:31:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:31.711 15:31:52 -- common/autotest_common.sh@10 -- # set +x 00:06:31.711 15:31:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.711 15:31:53 -- common/autotest_common.sh@852 -- # return 0 00:06:31.711 15:31:53 -- event/event.sh@39 -- # killprocess 57400 00:06:31.711 15:31:53 -- common/autotest_common.sh@926 -- # '[' -z 57400 ']' 00:06:31.711 15:31:53 -- common/autotest_common.sh@930 -- # kill -0 57400 00:06:31.711 15:31:53 -- common/autotest_common.sh@931 -- # uname 00:06:31.711 15:31:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:31.711 15:31:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57400 00:06:31.711 killing process with pid 57400 00:06:31.711 15:31:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:31.711 15:31:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:31.711 15:31:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57400' 00:06:31.711 15:31:53 -- common/autotest_common.sh@945 -- # kill 57400 00:06:31.711 15:31:53 -- common/autotest_common.sh@950 -- # wait 57400 00:06:32.646 spdk_app_start is called in Round 0. 00:06:32.646 Shutdown signal received, stop current app iteration 00:06:32.646 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:32.646 spdk_app_start is called in Round 1. 00:06:32.646 Shutdown signal received, stop current app iteration 00:06:32.646 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:32.646 spdk_app_start is called in Round 2. 00:06:32.646 Shutdown signal received, stop current app iteration 00:06:32.646 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:32.646 spdk_app_start is called in Round 3. 
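The write/verify pass traced earlier in this run reduces to three steps: seed a 1 MiB random pattern (256 blocks of 4 KiB), dd it onto each exported NBD device with the page cache bypassed, then cmp every device back against the pattern. A minimal standalone sketch of that flow, assuming /dev/nbd0 and /dev/nbd1 are already exported; the temp-file path here is illustrative:

  # Recreate the nbd_dd_data_verify write/verify cycle by hand.
  tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)
  nbd_list=(/dev/nbd0 /dev/nbd1)

  # 1 MiB pattern, matching the bs=4096 count=256 seen in the log.
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

  for dev in "${nbd_list[@]}"; do
      # oflag=direct bypasses the page cache so the bdev really gets written.
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  for dev in "${nbd_list[@]}"; do
      # cmp -b -n 1M byte-compares the first 1 MiB and fails on any mismatch.
      cmp -b -n 1M "$tmp_file" "$dev" || echo "verify failed on $dev"
  done
  rm "$tmp_file"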
00:06:32.646 Shutdown signal received, stop current app iteration 00:06:32.646 15:31:54 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:32.646 15:31:54 -- event/event.sh@42 -- # return 0 00:06:32.646 00:06:32.646 real 0m20.352s 00:06:32.646 user 0m44.262s 00:06:32.646 sys 0m2.685s 00:06:32.646 ************************************ 00:06:32.646 END TEST app_repeat 00:06:32.646 ************************************ 00:06:32.646 15:31:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.646 15:31:54 -- common/autotest_common.sh@10 -- # set +x 00:06:32.646 15:31:54 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:32.646 15:31:54 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:32.646 15:31:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:32.646 15:31:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.646 15:31:54 -- common/autotest_common.sh@10 -- # set +x 00:06:32.646 ************************************ 00:06:32.646 START TEST cpu_locks 00:06:32.646 ************************************ 00:06:32.646 15:31:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:32.646 * Looking for test storage... 00:06:32.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:32.646 15:31:54 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:32.646 15:31:54 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:32.646 15:31:54 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:32.646 15:31:54 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:32.646 15:31:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:32.646 15:31:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.646 15:31:54 -- common/autotest_common.sh@10 -- # set +x 00:06:32.646 ************************************ 00:06:32.646 START TEST default_locks 00:06:32.646 ************************************ 00:06:32.646 15:31:54 -- common/autotest_common.sh@1104 -- # default_locks 00:06:32.646 15:31:54 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57850 00:06:32.646 15:31:54 -- event/cpu_locks.sh@47 -- # waitforlisten 57850 00:06:32.646 15:31:54 -- common/autotest_common.sh@819 -- # '[' -z 57850 ']' 00:06:32.646 15:31:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.646 15:31:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.646 15:31:54 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.646 15:31:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.646 15:31:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.646 15:31:54 -- common/autotest_common.sh@10 -- # set +x 00:06:32.904 [2024-07-24 15:31:54.330848] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:32.904 [2024-07-24 15:31:54.331027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57850 ] 00:06:33.162 [2024-07-24 15:31:54.502467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.162 [2024-07-24 15:31:54.659780] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.162 [2024-07-24 15:31:54.660007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.539 15:31:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.539 15:31:55 -- common/autotest_common.sh@852 -- # return 0 00:06:34.539 15:31:55 -- event/cpu_locks.sh@49 -- # locks_exist 57850 00:06:34.539 15:31:55 -- event/cpu_locks.sh@22 -- # lslocks -p 57850 00:06:34.539 15:31:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.798 15:31:56 -- event/cpu_locks.sh@50 -- # killprocess 57850 00:06:34.798 15:31:56 -- common/autotest_common.sh@926 -- # '[' -z 57850 ']' 00:06:34.798 15:31:56 -- common/autotest_common.sh@930 -- # kill -0 57850 00:06:34.798 15:31:56 -- common/autotest_common.sh@931 -- # uname 00:06:34.798 15:31:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:34.798 15:31:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57850 00:06:34.798 killing process with pid 57850 00:06:34.798 15:31:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:34.798 15:31:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:34.798 15:31:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57850' 00:06:34.798 15:31:56 -- common/autotest_common.sh@945 -- # kill 57850 00:06:34.798 15:31:56 -- common/autotest_common.sh@950 -- # wait 57850 00:06:36.702 15:31:58 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57850 00:06:36.702 15:31:58 -- common/autotest_common.sh@640 -- # local es=0 00:06:36.702 15:31:58 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 57850 00:06:36.702 15:31:58 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:36.702 15:31:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:36.702 15:31:58 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:36.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.702 ERROR: process (pid: 57850) is no longer running 00:06:36.702 15:31:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:36.702 15:31:58 -- common/autotest_common.sh@643 -- # waitforlisten 57850 00:06:36.702 15:31:58 -- common/autotest_common.sh@819 -- # '[' -z 57850 ']' 00:06:36.702 15:31:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.702 15:31:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.702 15:31:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
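The locks_exist probe traced above (cpu_locks.sh line 22) is just lslocks filtered for the SPDK per-core lock files. The helper body is not echoed in full, so this is an approximation reconstructed from the two traced commands:

  # Approximate locks_exist: succeed only if the pid holds a
  # /var/tmp/spdk_cpu_lock_* flock, as reported by lslocks.
  locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  locks_exist 57850 && echo "pid 57850 holds its core lock"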
00:06:36.702 15:31:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.702 15:31:58 -- common/autotest_common.sh@10 -- # set +x 00:06:36.702 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (57850) - No such process 00:06:36.702 15:31:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.702 15:31:58 -- common/autotest_common.sh@852 -- # return 1 00:06:36.702 15:31:58 -- common/autotest_common.sh@643 -- # es=1 00:06:36.702 15:31:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:36.702 15:31:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:36.702 15:31:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:36.702 15:31:58 -- event/cpu_locks.sh@54 -- # no_locks 00:06:36.702 15:31:58 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:36.702 15:31:58 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:36.702 15:31:58 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:36.702 00:06:36.702 real 0m3.892s 00:06:36.702 user 0m4.163s 00:06:36.702 sys 0m0.520s 00:06:36.702 15:31:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.702 15:31:58 -- common/autotest_common.sh@10 -- # set +x 00:06:36.702 ************************************ 00:06:36.702 END TEST default_locks 00:06:36.702 ************************************ 00:06:36.702 15:31:58 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:36.702 15:31:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:36.702 15:31:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.702 15:31:58 -- common/autotest_common.sh@10 -- # set +x 00:06:36.702 ************************************ 00:06:36.702 START TEST default_locks_via_rpc 00:06:36.702 ************************************ 00:06:36.702 15:31:58 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:36.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.702 15:31:58 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57924 00:06:36.702 15:31:58 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.702 15:31:58 -- event/cpu_locks.sh@63 -- # waitforlisten 57924 00:06:36.702 15:31:58 -- common/autotest_common.sh@819 -- # '[' -z 57924 ']' 00:06:36.702 15:31:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.702 15:31:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.702 15:31:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.702 15:31:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.702 15:31:58 -- common/autotest_common.sh@10 -- # set +x 00:06:36.702 [2024-07-24 15:31:58.272822] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
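The NOT wrapper drives this negative test: it runs the wrapped command, captures the exit status into es, and succeeds only when the command failed, which is why the dead pid and the "No such process" line count as a pass here. A condensed sketch consistent with the es handling traced above; the signal-status normalization is an assumption, since the true branch of (( es > 128 )) is never taken in this log:

  NOT() {
      local es=0
      "$@" || es=$?
      # Assumed: statuses above 128 (killed by signal) collapse to plain failure.
      (( es > 128 )) && es=1
      (( !es == 0 ))   # return 0 iff the wrapped command failed
  }

  NOT waitforlisten 57850 && echo "pid 57850 is gone, as expected"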
00:06:36.702 [2024-07-24 15:31:58.272980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57924 ] 00:06:36.961 [2024-07-24 15:31:58.439803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.220 [2024-07-24 15:31:58.609778] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.220 [2024-07-24 15:31:58.610001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.606 15:31:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.607 15:31:59 -- common/autotest_common.sh@852 -- # return 0 00:06:38.607 15:31:59 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:38.607 15:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:38.607 15:31:59 -- common/autotest_common.sh@10 -- # set +x 00:06:38.607 15:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:38.607 15:31:59 -- event/cpu_locks.sh@67 -- # no_locks 00:06:38.607 15:31:59 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:38.607 15:31:59 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:38.607 15:31:59 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:38.607 15:31:59 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:38.607 15:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:38.607 15:31:59 -- common/autotest_common.sh@10 -- # set +x 00:06:38.607 15:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:38.607 15:31:59 -- event/cpu_locks.sh@71 -- # locks_exist 57924 00:06:38.607 15:31:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.607 15:31:59 -- event/cpu_locks.sh@22 -- # lslocks -p 57924 00:06:38.866 15:32:00 -- event/cpu_locks.sh@73 -- # killprocess 57924 00:06:38.866 15:32:00 -- common/autotest_common.sh@926 -- # '[' -z 57924 ']' 00:06:38.866 15:32:00 -- common/autotest_common.sh@930 -- # kill -0 57924 00:06:38.866 15:32:00 -- common/autotest_common.sh@931 -- # uname 00:06:38.866 15:32:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:38.866 15:32:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57924 00:06:38.866 killing process with pid 57924 00:06:38.866 15:32:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:38.866 15:32:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:38.866 15:32:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57924' 00:06:38.866 15:32:00 -- common/autotest_common.sh@945 -- # kill 57924 00:06:38.866 15:32:00 -- common/autotest_common.sh@950 -- # wait 57924 00:06:40.790 00:06:40.790 real 0m3.943s 00:06:40.790 user 0m4.247s 00:06:40.790 sys 0m0.542s 00:06:40.790 15:32:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.790 15:32:02 -- common/autotest_common.sh@10 -- # set +x 00:06:40.790 ************************************ 00:06:40.790 END TEST default_locks_via_rpc 00:06:40.790 ************************************ 00:06:40.790 15:32:02 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:40.790 15:32:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.790 15:32:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.790 15:32:02 -- common/autotest_common.sh@10 -- # set +x 00:06:40.790 
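default_locks_via_rpc exercises the same per-core lock, but toggled at runtime over JSON-RPC rather than at launch. The two rpc_cmd calls traced above map directly onto scripts/rpc.py, roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$rpc" framework_disable_cpumask_locks   # drop the spdk_cpu_lock_* flocks
  "$rpc" framework_enable_cpumask_locks    # re-acquire them

  # After re-enabling, the lock is visible again, as the lslocks line shows:
  lslocks -p 57924 | grep spdk_cpu_lock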
************************************ 00:06:40.790 START TEST non_locking_app_on_locked_coremask 00:06:40.790 ************************************ 00:06:40.790 15:32:02 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:40.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.790 15:32:02 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58003 00:06:40.790 15:32:02 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.790 15:32:02 -- event/cpu_locks.sh@81 -- # waitforlisten 58003 /var/tmp/spdk.sock 00:06:40.790 15:32:02 -- common/autotest_common.sh@819 -- # '[' -z 58003 ']' 00:06:40.790 15:32:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.790 15:32:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:40.790 15:32:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.790 15:32:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:40.790 15:32:02 -- common/autotest_common.sh@10 -- # set +x 00:06:40.790 [2024-07-24 15:32:02.266504] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:40.790 [2024-07-24 15:32:02.266674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58003 ] 00:06:41.048 [2024-07-24 15:32:02.439904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.307 [2024-07-24 15:32:02.660569] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:41.307 [2024-07-24 15:32:02.660808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.242 15:32:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:42.242 15:32:03 -- common/autotest_common.sh@852 -- # return 0 00:06:42.242 15:32:03 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:42.243 15:32:03 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58032 00:06:42.243 15:32:03 -- event/cpu_locks.sh@85 -- # waitforlisten 58032 /var/tmp/spdk2.sock 00:06:42.243 15:32:03 -- common/autotest_common.sh@819 -- # '[' -z 58032 ']' 00:06:42.243 15:32:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.243 15:32:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:42.243 15:32:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.243 15:32:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:42.243 15:32:03 -- common/autotest_common.sh@10 -- # set +x 00:06:42.502 [2024-07-24 15:32:03.889638] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:42.502 [2024-07-24 15:32:03.890026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58032 ] 00:06:42.502 [2024-07-24 15:32:04.056563] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
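non_locking_app_on_locked_coremask needs two targets alive on the same core at once; only the --disable-cpumask-locks flag on the second instance, plus a separate RPC socket via -r, makes that possible. In outline:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  # First instance claims the flock on core 0 (mask 0x1).
  "$spdk_tgt" -m 0x1 &

  # Second instance reuses core 0 but skips the lock, and listens on its
  # own socket so the two daemons' RPC servers do not collide.
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &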
00:06:42.502 [2024-07-24 15:32:04.056656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.068 [2024-07-24 15:32:04.391435] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:43.068 [2024-07-24 15:32:04.391709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.970 15:32:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:44.970 15:32:06 -- common/autotest_common.sh@852 -- # return 0 00:06:44.970 15:32:06 -- event/cpu_locks.sh@87 -- # locks_exist 58003 00:06:44.970 15:32:06 -- event/cpu_locks.sh@22 -- # lslocks -p 58003 00:06:44.970 15:32:06 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.539 15:32:07 -- event/cpu_locks.sh@89 -- # killprocess 58003 00:06:45.539 15:32:07 -- common/autotest_common.sh@926 -- # '[' -z 58003 ']' 00:06:45.539 15:32:07 -- common/autotest_common.sh@930 -- # kill -0 58003 00:06:45.539 15:32:07 -- common/autotest_common.sh@931 -- # uname 00:06:45.539 15:32:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:45.539 15:32:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58003 00:06:45.539 killing process with pid 58003 00:06:45.539 15:32:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:45.539 15:32:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:45.539 15:32:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58003' 00:06:45.539 15:32:07 -- common/autotest_common.sh@945 -- # kill 58003 00:06:45.539 15:32:07 -- common/autotest_common.sh@950 -- # wait 58003 00:06:49.729 15:32:10 -- event/cpu_locks.sh@90 -- # killprocess 58032 00:06:49.729 15:32:10 -- common/autotest_common.sh@926 -- # '[' -z 58032 ']' 00:06:49.729 15:32:10 -- common/autotest_common.sh@930 -- # kill -0 58032 00:06:49.729 15:32:10 -- common/autotest_common.sh@931 -- # uname 00:06:49.729 15:32:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:49.729 15:32:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58032 00:06:49.729 killing process with pid 58032 00:06:49.729 15:32:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:49.729 15:32:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:49.729 15:32:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58032' 00:06:49.729 15:32:10 -- common/autotest_common.sh@945 -- # kill 58032 00:06:49.729 15:32:10 -- common/autotest_common.sh@950 -- # wait 58032 00:06:51.668 00:06:51.668 real 0m10.794s 00:06:51.668 user 0m11.676s 00:06:51.668 sys 0m1.239s 00:06:51.668 15:32:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.668 ************************************ 00:06:51.668 END TEST non_locking_app_on_locked_coremask 00:06:51.668 ************************************ 00:06:51.668 15:32:12 -- common/autotest_common.sh@10 -- # set +x 00:06:51.668 15:32:12 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:51.668 15:32:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:51.668 15:32:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.668 15:32:12 -- common/autotest_common.sh@10 -- # set +x 00:06:51.668 ************************************ 00:06:51.668 START TEST locking_app_on_unlocked_coremask 00:06:51.668 ************************************ 00:06:51.668 15:32:13 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:51.668 15:32:13 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=58166 00:06:51.668 15:32:13 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:51.668 15:32:13 -- event/cpu_locks.sh@99 -- # waitforlisten 58166 /var/tmp/spdk.sock 00:06:51.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.668 15:32:13 -- common/autotest_common.sh@819 -- # '[' -z 58166 ']' 00:06:51.668 15:32:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.668 15:32:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:51.668 15:32:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.668 15:32:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:51.668 15:32:13 -- common/autotest_common.sh@10 -- # set +x 00:06:51.668 [2024-07-24 15:32:13.121697] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:51.668 [2024-07-24 15:32:13.122016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58166 ] 00:06:51.935 [2024-07-24 15:32:13.293144] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:51.935 [2024-07-24 15:32:13.293593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.935 [2024-07-24 15:32:13.472478] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:51.935 [2024-07-24 15:32:13.472718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.573 15:32:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:53.573 15:32:14 -- common/autotest_common.sh@852 -- # return 0 00:06:53.573 15:32:14 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58189 00:06:53.573 15:32:14 -- event/cpu_locks.sh@103 -- # waitforlisten 58189 /var/tmp/spdk2.sock 00:06:53.573 15:32:14 -- common/autotest_common.sh@819 -- # '[' -z 58189 ']' 00:06:53.573 15:32:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.573 15:32:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:53.573 15:32:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.573 15:32:14 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:53.573 15:32:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:53.573 15:32:14 -- common/autotest_common.sh@10 -- # set +x 00:06:53.573 [2024-07-24 15:32:14.861161] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
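Both launches block on waitforlisten, which polls the target's RPC socket until it answers or retries run out. The helper body is not echoed here, so this is a plausible reconstruction from the traced variables (rpc_addr, max_retries=100); probing with rpc_get_methods is an assumption:

  waitforlisten() {
      local pid=$1
      local rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 1; i <= max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
          # Any successful RPC round-trip proves the socket is live.
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
              rpc_get_methods &>/dev/null && return 0
          sleep 0.5
      done
      return 1
  }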
00:06:53.573 [2024-07-24 15:32:14.861287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58189 ] 00:06:53.573 [2024-07-24 15:32:15.028986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.831 [2024-07-24 15:32:15.372051] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:53.831 [2024-07-24 15:32:15.372322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.733 15:32:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:55.733 15:32:17 -- common/autotest_common.sh@852 -- # return 0 00:06:55.733 15:32:17 -- event/cpu_locks.sh@105 -- # locks_exist 58189 00:06:55.733 15:32:17 -- event/cpu_locks.sh@22 -- # lslocks -p 58189 00:06:55.733 15:32:17 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.683 15:32:17 -- event/cpu_locks.sh@107 -- # killprocess 58166 00:06:56.684 15:32:17 -- common/autotest_common.sh@926 -- # '[' -z 58166 ']' 00:06:56.684 15:32:17 -- common/autotest_common.sh@930 -- # kill -0 58166 00:06:56.684 15:32:17 -- common/autotest_common.sh@931 -- # uname 00:06:56.684 15:32:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:56.684 15:32:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58166 00:06:56.684 killing process with pid 58166 00:06:56.684 15:32:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:56.684 15:32:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:56.684 15:32:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58166' 00:06:56.684 15:32:18 -- common/autotest_common.sh@945 -- # kill 58166 00:06:56.684 15:32:18 -- common/autotest_common.sh@950 -- # wait 58166 00:07:00.869 15:32:21 -- event/cpu_locks.sh@108 -- # killprocess 58189 00:07:00.869 15:32:21 -- common/autotest_common.sh@926 -- # '[' -z 58189 ']' 00:07:00.869 15:32:21 -- common/autotest_common.sh@930 -- # kill -0 58189 00:07:00.869 15:32:21 -- common/autotest_common.sh@931 -- # uname 00:07:00.869 15:32:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:00.869 15:32:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58189 00:07:00.869 killing process with pid 58189 00:07:00.869 15:32:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:00.869 15:32:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:00.869 15:32:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58189' 00:07:00.869 15:32:21 -- common/autotest_common.sh@945 -- # kill 58189 00:07:00.869 15:32:21 -- common/autotest_common.sh@950 -- # wait 58189 00:07:02.769 ************************************ 00:07:02.769 END TEST locking_app_on_unlocked_coremask 00:07:02.769 ************************************ 00:07:02.769 00:07:02.769 real 0m10.865s 00:07:02.769 user 0m11.862s 00:07:02.769 sys 0m1.215s 00:07:02.769 15:32:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.769 15:32:23 -- common/autotest_common.sh@10 -- # set +x 00:07:02.769 15:32:23 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:02.769 15:32:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:02.769 15:32:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.769 15:32:23 -- common/autotest_common.sh@10 -- # set 
+x 00:07:02.769 ************************************ 00:07:02.769 START TEST locking_app_on_locked_coremask 00:07:02.769 ************************************ 00:07:02.769 15:32:23 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:07:02.769 15:32:23 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58328 00:07:02.769 15:32:23 -- event/cpu_locks.sh@116 -- # waitforlisten 58328 /var/tmp/spdk.sock 00:07:02.769 15:32:23 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.769 15:32:23 -- common/autotest_common.sh@819 -- # '[' -z 58328 ']' 00:07:02.769 15:32:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.769 15:32:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:02.769 15:32:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.769 15:32:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:02.769 15:32:23 -- common/autotest_common.sh@10 -- # set +x 00:07:02.769 [2024-07-24 15:32:24.020575] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:02.769 [2024-07-24 15:32:24.021296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58328 ] 00:07:02.769 [2024-07-24 15:32:24.180568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.769 [2024-07-24 15:32:24.363494] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:02.769 [2024-07-24 15:32:24.363784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.144 15:32:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:04.144 15:32:25 -- common/autotest_common.sh@852 -- # return 0 00:07:04.144 15:32:25 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58356 00:07:04.144 15:32:25 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:04.144 15:32:25 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58356 /var/tmp/spdk2.sock 00:07:04.144 15:32:25 -- common/autotest_common.sh@640 -- # local es=0 00:07:04.144 15:32:25 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58356 /var/tmp/spdk2.sock 00:07:04.144 15:32:25 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:04.144 15:32:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:04.144 15:32:25 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:04.144 15:32:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:04.144 15:32:25 -- common/autotest_common.sh@643 -- # waitforlisten 58356 /var/tmp/spdk2.sock 00:07:04.144 15:32:25 -- common/autotest_common.sh@819 -- # '[' -z 58356 ']' 00:07:04.144 15:32:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.144 15:32:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:04.144 15:32:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
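This launch is meant to die: pid 58328 already flocks core 0, and the second target is started without --disable-cpumask-locks, so the claim on the next lines fails and NOT turns that failure into a pass. Using the helpers sketched earlier, the essence is:

  # Second target on an already-locked core, locks enabled: must not start.
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &
  pid2=$!

  # waitforlisten fails once the process exits; NOT inverts that into a pass.
  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock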
00:07:04.144 15:32:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:04.144 15:32:25 -- common/autotest_common.sh@10 -- # set +x 00:07:04.402 [2024-07-24 15:32:25.819961] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:04.402 [2024-07-24 15:32:25.820146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58356 ] 00:07:04.402 [2024-07-24 15:32:25.988966] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58328 has claimed it. 00:07:04.402 [2024-07-24 15:32:25.989044] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:04.967 ERROR: process (pid: 58356) is no longer running 00:07:04.967 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58356) - No such process 00:07:04.967 15:32:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:04.967 15:32:26 -- common/autotest_common.sh@852 -- # return 1 00:07:04.967 15:32:26 -- common/autotest_common.sh@643 -- # es=1 00:07:04.967 15:32:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:04.967 15:32:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:04.967 15:32:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:04.967 15:32:26 -- event/cpu_locks.sh@122 -- # locks_exist 58328 00:07:04.967 15:32:26 -- event/cpu_locks.sh@22 -- # lslocks -p 58328 00:07:04.967 15:32:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.534 15:32:26 -- event/cpu_locks.sh@124 -- # killprocess 58328 00:07:05.534 15:32:26 -- common/autotest_common.sh@926 -- # '[' -z 58328 ']' 00:07:05.534 15:32:26 -- common/autotest_common.sh@930 -- # kill -0 58328 00:07:05.534 15:32:26 -- common/autotest_common.sh@931 -- # uname 00:07:05.534 15:32:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:05.534 15:32:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58328 00:07:05.534 15:32:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:05.534 killing process with pid 58328 00:07:05.534 15:32:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:05.534 15:32:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58328' 00:07:05.534 15:32:26 -- common/autotest_common.sh@945 -- # kill 58328 00:07:05.534 15:32:26 -- common/autotest_common.sh@950 -- # wait 58328 00:07:07.435 00:07:07.435 real 0m5.003s 00:07:07.435 user 0m5.656s 00:07:07.435 sys 0m0.726s 00:07:07.435 15:32:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.435 ************************************ 00:07:07.435 END TEST locking_app_on_locked_coremask 00:07:07.435 ************************************ 00:07:07.435 15:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:07.435 15:32:28 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:07.435 15:32:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:07.435 15:32:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.435 15:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:07.435 ************************************ 00:07:07.435 START TEST locking_overlapped_coremask 00:07:07.435 ************************************ 00:07:07.435 15:32:28 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:07:07.435 15:32:28 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58421 00:07:07.435 15:32:28 -- event/cpu_locks.sh@133 -- # waitforlisten 58421 /var/tmp/spdk.sock 00:07:07.435 15:32:28 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:07.435 15:32:28 -- common/autotest_common.sh@819 -- # '[' -z 58421 ']' 00:07:07.435 15:32:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.436 15:32:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:07.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.436 15:32:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.436 15:32:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:07.436 15:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:07.694 [2024-07-24 15:32:29.092299] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:07.694 [2024-07-24 15:32:29.092466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58421 ] 00:07:07.694 [2024-07-24 15:32:29.263875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.952 [2024-07-24 15:32:29.441725] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:07.952 [2024-07-24 15:32:29.442149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.952 [2024-07-24 15:32:29.442240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.952 [2024-07-24 15:32:29.442244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.350 15:32:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:09.350 15:32:30 -- common/autotest_common.sh@852 -- # return 0 00:07:09.350 15:32:30 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58447 00:07:09.350 15:32:30 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58447 /var/tmp/spdk2.sock 00:07:09.350 15:32:30 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:09.350 15:32:30 -- common/autotest_common.sh@640 -- # local es=0 00:07:09.350 15:32:30 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58447 /var/tmp/spdk2.sock 00:07:09.350 15:32:30 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:09.350 15:32:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:09.350 15:32:30 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:09.350 15:32:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:09.350 15:32:30 -- common/autotest_common.sh@643 -- # waitforlisten 58447 /var/tmp/spdk2.sock 00:07:09.350 15:32:30 -- common/autotest_common.sh@819 -- # '[' -z 58447 ']' 00:07:09.350 15:32:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.350 15:32:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:09.350 15:32:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
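Why is core 2 the contended one on the next lines? The two masks overlap in exactly one bit: 0x7 selects cores 0 through 2 for pid 58421, and 0x1c selects cores 2 through 4 for the second target, so their intersection is core 2. Checking the overlap in shell:

  # 0x7  = 0b00111 -> cores 0,1,2   (first target)
  # 0x1c = 0b11100 -> cores 2,3,4   (second target)
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2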
00:07:09.350 15:32:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:09.350 15:32:30 -- common/autotest_common.sh@10 -- # set +x 00:07:09.350 [2024-07-24 15:32:30.885805] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:09.350 [2024-07-24 15:32:30.886549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58447 ] 00:07:09.608 [2024-07-24 15:32:31.082540] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58421 has claimed it. 00:07:09.608 [2024-07-24 15:32:31.082607] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:10.174 ERROR: process (pid: 58447) is no longer running 00:07:10.174 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58447) - No such process 00:07:10.174 15:32:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:10.174 15:32:31 -- common/autotest_common.sh@852 -- # return 1 00:07:10.174 15:32:31 -- common/autotest_common.sh@643 -- # es=1 00:07:10.174 15:32:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:10.174 15:32:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:10.174 15:32:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:10.174 15:32:31 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:10.174 15:32:31 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:10.174 15:32:31 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:10.174 15:32:31 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:10.174 15:32:31 -- event/cpu_locks.sh@141 -- # killprocess 58421 00:07:10.174 15:32:31 -- common/autotest_common.sh@926 -- # '[' -z 58421 ']' 00:07:10.174 15:32:31 -- common/autotest_common.sh@930 -- # kill -0 58421 00:07:10.174 15:32:31 -- common/autotest_common.sh@931 -- # uname 00:07:10.174 15:32:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:10.174 15:32:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58421 00:07:10.174 15:32:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:10.174 15:32:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:10.174 killing process with pid 58421 00:07:10.174 15:32:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58421' 00:07:10.174 15:32:31 -- common/autotest_common.sh@945 -- # kill 58421 00:07:10.174 15:32:31 -- common/autotest_common.sh@950 -- # wait 58421 00:07:12.075 00:07:12.075 real 0m4.586s 00:07:12.075 user 0m12.604s 00:07:12.075 sys 0m0.556s 00:07:12.075 15:32:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.075 15:32:33 -- common/autotest_common.sh@10 -- # set +x 00:07:12.075 ************************************ 00:07:12.075 END TEST locking_overlapped_coremask 00:07:12.075 ************************************ 00:07:12.075 15:32:33 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:12.075 15:32:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:12.075 15:32:33 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.075 15:32:33 -- common/autotest_common.sh@10 -- # set +x 00:07:12.075 ************************************ 00:07:12.075 START TEST locking_overlapped_coremask_via_rpc 00:07:12.075 ************************************ 00:07:12.075 15:32:33 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:07:12.075 15:32:33 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58511 00:07:12.075 15:32:33 -- event/cpu_locks.sh@149 -- # waitforlisten 58511 /var/tmp/spdk.sock 00:07:12.075 15:32:33 -- common/autotest_common.sh@819 -- # '[' -z 58511 ']' 00:07:12.075 15:32:33 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:12.075 15:32:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.075 15:32:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:12.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.075 15:32:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.075 15:32:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:12.075 15:32:33 -- common/autotest_common.sh@10 -- # set +x 00:07:12.333 [2024-07-24 15:32:33.724481] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:12.333 [2024-07-24 15:32:33.725065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58511 ] 00:07:12.333 [2024-07-24 15:32:33.896730] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:12.333 [2024-07-24 15:32:33.896804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.591 [2024-07-24 15:32:34.072394] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:12.591 [2024-07-24 15:32:34.072843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.591 [2024-07-24 15:32:34.072918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.591 [2024-07-24 15:32:34.072901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.968 15:32:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:13.968 15:32:35 -- common/autotest_common.sh@852 -- # return 0 00:07:13.968 15:32:35 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58543 00:07:13.968 15:32:35 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:13.968 15:32:35 -- event/cpu_locks.sh@153 -- # waitforlisten 58543 /var/tmp/spdk2.sock 00:07:13.968 15:32:35 -- common/autotest_common.sh@819 -- # '[' -z 58543 ']' 00:07:13.968 15:32:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.968 15:32:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:13.968 15:32:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:13.968 15:32:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:13.968 15:32:35 -- common/autotest_common.sh@10 -- # set +x 00:07:13.968 [2024-07-24 15:32:35.504853] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:13.968 [2024-07-24 15:32:35.505002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58543 ] 00:07:14.226 [2024-07-24 15:32:35.675601] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:14.226 [2024-07-24 15:32:35.675657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.485 [2024-07-24 15:32:36.045976] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:14.485 [2024-07-24 15:32:36.046379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.485 [2024-07-24 15:32:36.046517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.485 [2024-07-24 15:32:36.046531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:16.382 15:32:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:16.382 15:32:37 -- common/autotest_common.sh@852 -- # return 0 00:07:16.382 15:32:37 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:16.382 15:32:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.382 15:32:37 -- common/autotest_common.sh@10 -- # set +x 00:07:16.382 15:32:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.382 15:32:37 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.382 15:32:37 -- common/autotest_common.sh@640 -- # local es=0 00:07:16.382 15:32:37 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.382 15:32:37 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:07:16.383 15:32:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:16.383 15:32:37 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:07:16.383 15:32:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:16.383 15:32:37 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.383 15:32:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.383 15:32:37 -- common/autotest_common.sh@10 -- # set +x 00:07:16.383 [2024-07-24 15:32:37.895298] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58511 has claimed it. 
00:07:16.383 request: 00:07:16.383 { 00:07:16.383 "method": "framework_enable_cpumask_locks", 00:07:16.383 "req_id": 1 00:07:16.383 } 00:07:16.383 Got JSON-RPC error response 00:07:16.383 response: 00:07:16.383 { 00:07:16.383 "code": -32603, 00:07:16.383 "message": "Failed to claim CPU core: 2" 00:07:16.383 } 00:07:16.383 15:32:37 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:07:16.383 15:32:37 -- common/autotest_common.sh@643 -- # es=1 00:07:16.383 15:32:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:16.383 15:32:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:16.383 15:32:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:16.383 15:32:37 -- event/cpu_locks.sh@158 -- # waitforlisten 58511 /var/tmp/spdk.sock 00:07:16.383 15:32:37 -- common/autotest_common.sh@819 -- # '[' -z 58511 ']' 00:07:16.383 15:32:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.383 15:32:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:16.383 15:32:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.383 15:32:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:16.383 15:32:37 -- common/autotest_common.sh@10 -- # set +x 00:07:16.640 15:32:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:16.640 15:32:38 -- common/autotest_common.sh@852 -- # return 0 00:07:16.640 15:32:38 -- event/cpu_locks.sh@159 -- # waitforlisten 58543 /var/tmp/spdk2.sock 00:07:16.640 15:32:38 -- common/autotest_common.sh@819 -- # '[' -z 58543 ']' 00:07:16.640 15:32:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.640 15:32:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:16.640 15:32:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
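The request/response pair above is plain JSON-RPC: the second target, started with locks disabled, asks to enable them, but core 2 is still flocked by pid 58511, so the server answers with internal error -32603. The same call can be reproduced by hand against the second socket; the error handling below is illustrative:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Expected to fail while pid 58511 holds the overlapping core-2 lock.
  if ! "$rpc" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
      echo "enable refused: another instance owns an overlapping core lock"
  fi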
00:07:16.640 15:32:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:16.640 15:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:16.898 15:32:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:16.898 15:32:38 -- common/autotest_common.sh@852 -- # return 0 00:07:16.898 15:32:38 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:16.898 15:32:38 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:16.898 15:32:38 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:16.898 15:32:38 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:16.898 00:07:16.898 real 0m4.814s 00:07:16.898 user 0m1.961s 00:07:16.898 sys 0m0.276s 00:07:16.898 15:32:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.898 ************************************ 00:07:16.898 END TEST locking_overlapped_coremask_via_rpc 00:07:16.898 ************************************ 00:07:16.898 15:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:16.898 15:32:38 -- event/cpu_locks.sh@174 -- # cleanup 00:07:16.898 15:32:38 -- event/cpu_locks.sh@15 -- # [[ -z 58511 ]] 00:07:16.898 15:32:38 -- event/cpu_locks.sh@15 -- # killprocess 58511 00:07:16.898 15:32:38 -- common/autotest_common.sh@926 -- # '[' -z 58511 ']' 00:07:16.898 15:32:38 -- common/autotest_common.sh@930 -- # kill -0 58511 00:07:16.898 15:32:38 -- common/autotest_common.sh@931 -- # uname 00:07:16.898 15:32:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:16.898 15:32:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58511 00:07:16.898 15:32:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:16.898 killing process with pid 58511 00:07:16.898 15:32:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:16.898 15:32:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58511' 00:07:16.898 15:32:38 -- common/autotest_common.sh@945 -- # kill 58511 00:07:16.898 15:32:38 -- common/autotest_common.sh@950 -- # wait 58511 00:07:19.422 15:32:40 -- event/cpu_locks.sh@16 -- # [[ -z 58543 ]] 00:07:19.422 15:32:40 -- event/cpu_locks.sh@16 -- # killprocess 58543 00:07:19.422 15:32:40 -- common/autotest_common.sh@926 -- # '[' -z 58543 ']' 00:07:19.422 15:32:40 -- common/autotest_common.sh@930 -- # kill -0 58543 00:07:19.422 15:32:40 -- common/autotest_common.sh@931 -- # uname 00:07:19.422 15:32:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:19.422 15:32:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58543 00:07:19.422 15:32:40 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:07:19.422 15:32:40 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:07:19.422 killing process with pid 58543 00:07:19.422 15:32:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58543' 00:07:19.422 15:32:40 -- common/autotest_common.sh@945 -- # kill 58543 00:07:19.422 15:32:40 -- common/autotest_common.sh@950 -- # wait 58543 00:07:21.320 15:32:42 -- event/cpu_locks.sh@18 -- # rm -f 00:07:21.320 15:32:42 -- event/cpu_locks.sh@1 -- # cleanup 00:07:21.320 15:32:42 -- event/cpu_locks.sh@15 -- # [[ -z 58511 ]] 00:07:21.320 15:32:42 -- event/cpu_locks.sh@15 -- # killprocess 58511 00:07:21.320 15:32:42 -- 
common/autotest_common.sh@926 -- # '[' -z 58511 ']' 00:07:21.320 Process with pid 58511 is not found 00:07:21.320 15:32:42 -- common/autotest_common.sh@930 -- # kill -0 58511 00:07:21.320 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (58511) - No such process 00:07:21.320 15:32:42 -- common/autotest_common.sh@953 -- # echo 'Process with pid 58511 is not found' 00:07:21.320 15:32:42 -- event/cpu_locks.sh@16 -- # [[ -z 58543 ]] 00:07:21.320 15:32:42 -- event/cpu_locks.sh@16 -- # killprocess 58543 00:07:21.320 15:32:42 -- common/autotest_common.sh@926 -- # '[' -z 58543 ']' 00:07:21.320 15:32:42 -- common/autotest_common.sh@930 -- # kill -0 58543 00:07:21.320 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (58543) - No such process 00:07:21.320 Process with pid 58543 is not found 00:07:21.320 15:32:42 -- common/autotest_common.sh@953 -- # echo 'Process with pid 58543 is not found' 00:07:21.320 15:32:42 -- event/cpu_locks.sh@18 -- # rm -f 00:07:21.320 00:07:21.320 real 0m48.486s 00:07:21.320 user 1m25.568s 00:07:21.320 sys 0m5.929s 00:07:21.320 15:32:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.320 ************************************ 00:07:21.320 END TEST cpu_locks 00:07:21.320 ************************************ 00:07:21.320 15:32:42 -- common/autotest_common.sh@10 -- # set +x 00:07:21.320 ************************************ 00:07:21.320 END TEST event 00:07:21.320 ************************************ 00:07:21.320 00:07:21.320 real 1m19.419s 00:07:21.320 user 2m26.144s 00:07:21.320 sys 0m9.519s 00:07:21.320 15:32:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.320 15:32:42 -- common/autotest_common.sh@10 -- # set +x 00:07:21.320 15:32:42 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:21.320 15:32:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:21.320 15:32:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.320 15:32:42 -- common/autotest_common.sh@10 -- # set +x 00:07:21.320 ************************************ 00:07:21.320 START TEST thread 00:07:21.320 ************************************ 00:07:21.320 15:32:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:21.320 * Looking for test storage... 00:07:21.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:21.320 15:32:42 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:21.320 15:32:42 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:21.320 15:32:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.320 15:32:42 -- common/autotest_common.sh@10 -- # set +x 00:07:21.320 ************************************ 00:07:21.320 START TEST thread_poller_perf 00:07:21.320 ************************************ 00:07:21.320 15:32:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:21.320 [2024-07-24 15:32:42.817298] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:21.320 [2024-07-24 15:32:42.817468] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58720 ] 00:07:21.578 [2024-07-24 15:32:42.988777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.835 [2024-07-24 15:32:43.211159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.835 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:23.207 ====================================== 00:07:23.207 busy:2212213029 (cyc) 00:07:23.207 total_run_count: 284000 00:07:23.207 tsc_hz: 2200000000 (cyc) 00:07:23.207 ====================================== 00:07:23.207 poller_cost: 7789 (cyc), 3540 (nsec) 00:07:23.207 00:07:23.207 real 0m1.812s 00:07:23.207 user 0m1.597s 00:07:23.207 sys 0m0.103s 00:07:23.207 15:32:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.207 15:32:44 -- common/autotest_common.sh@10 -- # set +x 00:07:23.207 ************************************ 00:07:23.207 END TEST thread_poller_perf 00:07:23.207 ************************************ 00:07:23.207 15:32:44 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:23.207 15:32:44 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:23.207 15:32:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.207 15:32:44 -- common/autotest_common.sh@10 -- # set +x 00:07:23.207 ************************************ 00:07:23.207 START TEST thread_poller_perf 00:07:23.207 ************************************ 00:07:23.207 15:32:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:23.207 [2024-07-24 15:32:44.690427] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:23.207 [2024-07-24 15:32:44.690586] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58757 ] 00:07:23.465 [2024-07-24 15:32:44.861530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.465 [2024-07-24 15:32:45.043867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.465 Running 1000 pollers for 1 seconds with 0 microseconds period. 
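The cost figures in the first run above are plain arithmetic over the printed counters: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds with the reported tsc_hz. Reproducing them in shell, with the numbers copied from the run and nothing SPDK-specific involved:

  echo $(( 2212213029 / 284000 ))             # 7789 cyc per run
  echo $(( 7789 * 1000000000 / 2200000000 ))  # 3540 nsec at a 2.2 GHz TSC

The 0-microsecond-period run announced next lets the same pollers fire back to back, so expect a much larger total_run_count and a much smaller per-run cost in the result block that follows.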
00:07:24.836 ====================================== 00:07:24.836 busy:2204992001 (cyc) 00:07:24.836 total_run_count: 3808000 00:07:24.836 tsc_hz: 2200000000 (cyc) 00:07:24.836 ====================================== 00:07:24.836 poller_cost: 579 (cyc), 263 (nsec) 00:07:24.836 00:07:24.836 real 0m1.724s 00:07:24.836 user 0m1.508s 00:07:24.836 sys 0m0.106s 00:07:24.836 15:32:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.836 ************************************ 00:07:24.836 END TEST thread_poller_perf 00:07:24.836 15:32:46 -- common/autotest_common.sh@10 -- # set +x 00:07:24.836 ************************************ 00:07:24.836 15:32:46 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:24.836 ************************************ 00:07:24.836 END TEST thread 00:07:24.836 ************************************ 00:07:24.836 00:07:24.836 real 0m3.713s 00:07:24.836 user 0m3.173s 00:07:24.836 sys 0m0.309s 00:07:24.836 15:32:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.836 15:32:46 -- common/autotest_common.sh@10 -- # set +x 00:07:25.093 15:32:46 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:25.093 15:32:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:25.093 15:32:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.093 15:32:46 -- common/autotest_common.sh@10 -- # set +x 00:07:25.093 ************************************ 00:07:25.093 START TEST accel 00:07:25.093 ************************************ 00:07:25.093 15:32:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:25.093 * Looking for test storage... 00:07:25.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:25.093 15:32:46 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:25.093 15:32:46 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:25.093 15:32:46 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:25.093 15:32:46 -- accel/accel.sh@59 -- # spdk_tgt_pid=58837 00:07:25.093 15:32:46 -- accel/accel.sh@60 -- # waitforlisten 58837 00:07:25.093 15:32:46 -- common/autotest_common.sh@819 -- # '[' -z 58837 ']' 00:07:25.093 15:32:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.093 15:32:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:25.093 15:32:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.093 15:32:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:25.093 15:32:46 -- common/autotest_common.sh@10 -- # set +x 00:07:25.093 15:32:46 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:25.093 15:32:46 -- accel/accel.sh@58 -- # build_accel_config 00:07:25.093 15:32:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.093 15:32:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.093 15:32:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.093 15:32:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.093 15:32:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.093 15:32:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.093 15:32:46 -- accel/accel.sh@42 -- # jq -r . 00:07:25.093 [2024-07-24 15:32:46.646012] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:25.093 [2024-07-24 15:32:46.646221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58837 ] 00:07:25.350 [2024-07-24 15:32:46.815168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.608 [2024-07-24 15:32:47.017868] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:25.608 [2024-07-24 15:32:47.018161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.978 15:32:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:26.978 15:32:48 -- common/autotest_common.sh@852 -- # return 0 00:07:26.978 15:32:48 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:26.978 15:32:48 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:26.978 15:32:48 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:26.978 15:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:26.978 15:32:48 -- common/autotest_common.sh@10 -- # set +x 00:07:26.978 15:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:26.978 15:32:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # IFS== 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # read -r opc module 00:07:26.978 15:32:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:26.978 15:32:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # IFS== 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # read -r opc module 00:07:26.978 15:32:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:26.978 15:32:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # IFS== 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # read -r opc module 00:07:26.978 15:32:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:26.978 15:32:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # IFS== 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # read -r opc module 00:07:26.978 15:32:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:26.978 15:32:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # IFS== 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # read -r opc module 00:07:26.978 15:32:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:26.978 15:32:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # IFS== 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # read -r opc module 00:07:26.978 15:32:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:26.978 15:32:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # IFS== 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # read -r opc module 00:07:26.978 15:32:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:26.978 15:32:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # IFS== 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # read -r opc module 00:07:26.978 15:32:48 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:07:26.978 15:32:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # IFS== 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # read -r opc module 00:07:26.978 15:32:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:26.978 15:32:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # IFS== 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # read -r opc module 00:07:26.978 15:32:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:26.978 15:32:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # IFS== 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # read -r opc module 00:07:26.978 15:32:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:26.978 15:32:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # IFS== 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # read -r opc module 00:07:26.978 15:32:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:26.978 15:32:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # IFS== 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # read -r opc module 00:07:26.978 15:32:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:26.978 15:32:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # IFS== 00:07:26.978 15:32:48 -- accel/accel.sh@64 -- # read -r opc module 00:07:26.978 15:32:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:26.978 15:32:48 -- accel/accel.sh@67 -- # killprocess 58837 00:07:26.978 15:32:48 -- common/autotest_common.sh@926 -- # '[' -z 58837 ']' 00:07:26.978 15:32:48 -- common/autotest_common.sh@930 -- # kill -0 58837 00:07:26.978 15:32:48 -- common/autotest_common.sh@931 -- # uname 00:07:26.978 15:32:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:26.978 15:32:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58837 00:07:26.978 15:32:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:26.978 15:32:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:26.978 killing process with pid 58837 00:07:26.978 15:32:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58837' 00:07:26.978 15:32:48 -- common/autotest_common.sh@945 -- # kill 58837 00:07:26.978 15:32:48 -- common/autotest_common.sh@950 -- # wait 58837 00:07:28.877 15:32:50 -- accel/accel.sh@68 -- # trap - ERR 00:07:28.877 15:32:50 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:28.877 15:32:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:28.877 15:32:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.877 15:32:50 -- common/autotest_common.sh@10 -- # set +x 00:07:28.877 15:32:50 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:07:28.877 15:32:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:28.877 15:32:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.877 15:32:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.877 15:32:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.877 15:32:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.877 15:32:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.877 15:32:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:07:28.877 15:32:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.877 15:32:50 -- accel/accel.sh@42 -- # jq -r . 00:07:29.135 15:32:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.135 15:32:50 -- common/autotest_common.sh@10 -- # set +x 00:07:29.135 15:32:50 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:29.135 15:32:50 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:29.135 15:32:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.135 15:32:50 -- common/autotest_common.sh@10 -- # set +x 00:07:29.135 ************************************ 00:07:29.135 START TEST accel_missing_filename 00:07:29.135 ************************************ 00:07:29.135 15:32:50 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:07:29.135 15:32:50 -- common/autotest_common.sh@640 -- # local es=0 00:07:29.135 15:32:50 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:29.135 15:32:50 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:29.135 15:32:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:29.135 15:32:50 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:29.135 15:32:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:29.135 15:32:50 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:07:29.135 15:32:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:29.135 15:32:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.135 15:32:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.135 15:32:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.135 15:32:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.135 15:32:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.135 15:32:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.135 15:32:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.135 15:32:50 -- accel/accel.sh@42 -- # jq -r . 00:07:29.135 [2024-07-24 15:32:50.595014] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:29.135 [2024-07-24 15:32:50.595216] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58920 ] 00:07:29.393 [2024-07-24 15:32:50.762986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.393 [2024-07-24 15:32:50.934767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.651 [2024-07-24 15:32:51.109124] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.218 [2024-07-24 15:32:51.554480] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:30.476 A filename is required. 
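"A filename is required." is the pass condition here: a compress workload reads its input from a file (see the -l and -o notes in the usage text further down), so -t 1 -w compress with no -l must exit nonzero, and the NOT wrapper counts that failure as a pass. The compress_verify test just below checks the next rejection in line: even with -l supplied, -y is refused because compression does not support verify. A sketch of a form the parser should accept, with the input path taken from that run:

  ./build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib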
00:07:30.476 15:32:51 -- common/autotest_common.sh@643 -- # es=234 00:07:30.476 15:32:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:30.476 15:32:51 -- common/autotest_common.sh@652 -- # es=106 00:07:30.476 ************************************ 00:07:30.476 END TEST accel_missing_filename 00:07:30.476 ************************************ 00:07:30.476 15:32:51 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:30.476 15:32:51 -- common/autotest_common.sh@660 -- # es=1 00:07:30.476 15:32:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:30.476 00:07:30.476 real 0m1.357s 00:07:30.476 user 0m1.136s 00:07:30.476 sys 0m0.156s 00:07:30.476 15:32:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.476 15:32:51 -- common/autotest_common.sh@10 -- # set +x 00:07:30.476 15:32:51 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:30.476 15:32:51 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:30.476 15:32:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:30.476 15:32:51 -- common/autotest_common.sh@10 -- # set +x 00:07:30.476 ************************************ 00:07:30.476 START TEST accel_compress_verify 00:07:30.476 ************************************ 00:07:30.476 15:32:51 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:30.476 15:32:51 -- common/autotest_common.sh@640 -- # local es=0 00:07:30.476 15:32:51 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:30.476 15:32:51 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:30.476 15:32:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:30.476 15:32:51 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:30.476 15:32:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:30.476 15:32:51 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:30.476 15:32:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:30.476 15:32:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.476 15:32:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.476 15:32:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.476 15:32:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.476 15:32:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.476 15:32:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.476 15:32:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.476 15:32:51 -- accel/accel.sh@42 -- # jq -r . 00:07:30.476 [2024-07-24 15:32:52.009930] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:30.476 [2024-07-24 15:32:52.010115] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58951 ] 00:07:30.734 [2024-07-24 15:32:52.181575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.993 [2024-07-24 15:32:52.387043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.993 [2024-07-24 15:32:52.570941] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.560 [2024-07-24 15:32:52.985996] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:31.832 00:07:31.832 Compression does not support the verify option, aborting. 00:07:31.832 15:32:53 -- common/autotest_common.sh@643 -- # es=161 00:07:31.832 15:32:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:31.832 15:32:53 -- common/autotest_common.sh@652 -- # es=33 00:07:31.832 15:32:53 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:31.832 15:32:53 -- common/autotest_common.sh@660 -- # es=1 00:07:31.832 15:32:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:31.832 00:07:31.832 real 0m1.396s 00:07:31.832 user 0m1.184s 00:07:31.832 sys 0m0.156s 00:07:31.832 ************************************ 00:07:31.832 END TEST accel_compress_verify 00:07:31.832 ************************************ 00:07:31.832 15:32:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.832 15:32:53 -- common/autotest_common.sh@10 -- # set +x 00:07:31.832 15:32:53 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:31.832 15:32:53 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:31.832 15:32:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.832 15:32:53 -- common/autotest_common.sh@10 -- # set +x 00:07:31.832 ************************************ 00:07:31.832 START TEST accel_wrong_workload 00:07:31.832 ************************************ 00:07:31.832 15:32:53 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:07:31.832 15:32:53 -- common/autotest_common.sh@640 -- # local es=0 00:07:31.832 15:32:53 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:31.832 15:32:53 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:31.832 15:32:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:31.832 15:32:53 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:31.832 15:32:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:31.832 15:32:53 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:07:31.832 15:32:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:31.832 15:32:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.832 15:32:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.832 15:32:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.832 15:32:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.832 15:32:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.832 15:32:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.832 15:32:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.832 15:32:53 -- accel/accel.sh@42 -- # jq -r . 
00:07:32.102 Unsupported workload type: foobar 00:07:32.102 [2024-07-24 15:32:53.449729] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:32.102 accel_perf options: 00:07:32.102 [-h help message] 00:07:32.102 [-q queue depth per core] 00:07:32.102 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:32.102 [-T number of threads per core 00:07:32.102 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:32.102 [-t time in seconds] 00:07:32.102 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:32.102 [ dif_verify, , dif_generate, dif_generate_copy 00:07:32.102 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:32.102 [-l for compress/decompress workloads, name of uncompressed input file 00:07:32.102 [-S for crc32c workload, use this seed value (default 0) 00:07:32.102 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:32.102 [-f for fill workload, use this BYTE value (default 255) 00:07:32.102 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:32.102 [-y verify result if this switch is on] 00:07:32.102 [-a tasks to allocate per core (default: same value as -q)] 00:07:32.102 Can be used to spread operations across a wider range of memory. 00:07:32.102 ************************************ 00:07:32.102 END TEST accel_wrong_workload 00:07:32.102 ************************************ 00:07:32.102 15:32:53 -- common/autotest_common.sh@643 -- # es=1 00:07:32.102 15:32:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:32.102 15:32:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:32.102 15:32:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:32.102 00:07:32.102 real 0m0.076s 00:07:32.102 user 0m0.083s 00:07:32.102 sys 0m0.043s 00:07:32.102 15:32:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.102 15:32:53 -- common/autotest_common.sh@10 -- # set +x 00:07:32.102 15:32:53 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:32.102 15:32:53 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:32.102 15:32:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.103 15:32:53 -- common/autotest_common.sh@10 -- # set +x 00:07:32.103 ************************************ 00:07:32.103 START TEST accel_negative_buffers 00:07:32.103 ************************************ 00:07:32.103 15:32:53 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:32.103 15:32:53 -- common/autotest_common.sh@640 -- # local es=0 00:07:32.103 15:32:53 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:32.103 15:32:53 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:32.103 15:32:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:32.103 15:32:53 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:32.103 15:32:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:32.103 15:32:53 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:07:32.103 15:32:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:32.103 15:32:53 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:32.103 15:32:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.103 15:32:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.103 15:32:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.103 15:32:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.103 15:32:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.103 15:32:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.103 15:32:53 -- accel/accel.sh@42 -- # jq -r . 00:07:32.103 -x option must be non-negative. 00:07:32.103 [2024-07-24 15:32:53.573314] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:32.103 accel_perf options: 00:07:32.103 [-h help message] 00:07:32.103 [-q queue depth per core] 00:07:32.103 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:32.103 [-T number of threads per core 00:07:32.103 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:32.103 [-t time in seconds] 00:07:32.103 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:32.103 [ dif_verify, , dif_generate, dif_generate_copy 00:07:32.103 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:32.103 [-l for compress/decompress workloads, name of uncompressed input file 00:07:32.103 [-S for crc32c workload, use this seed value (default 0) 00:07:32.103 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:32.103 [-f for fill workload, use this BYTE value (default 255) 00:07:32.103 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:32.103 [-y verify result if this switch is on] 00:07:32.103 [-a tasks to allocate per core (default: same value as -q)] 00:07:32.103 Can be used to spread operations across a wider range of memory. 
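The two rejected invocations above exercise the option parser rather than any engine: -w foobar and -x -1 both fail inside spdk_app_parse_args, print the usage text dumped twice above, and exit 1, which NOT inverts into a passing test. Going by that usage text, a minimal invocation the parser should accept is:

  ./build/examples/accel_perf -t 1 -w xor -y -x 2   # xor needs at least 2 source buffers per the help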
00:07:32.103 15:32:53 -- common/autotest_common.sh@643 -- # es=1 00:07:32.103 15:32:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:32.103 15:32:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:32.103 15:32:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:32.103 00:07:32.103 real 0m0.071s 00:07:32.103 user 0m0.081s 00:07:32.103 sys 0m0.038s 00:07:32.103 15:32:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.103 15:32:53 -- common/autotest_common.sh@10 -- # set +x 00:07:32.103 ************************************ 00:07:32.103 END TEST accel_negative_buffers 00:07:32.103 ************************************ 00:07:32.103 15:32:53 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:32.103 15:32:53 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:32.103 15:32:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.103 15:32:53 -- common/autotest_common.sh@10 -- # set +x 00:07:32.103 ************************************ 00:07:32.103 START TEST accel_crc32c 00:07:32.103 ************************************ 00:07:32.103 15:32:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:32.103 15:32:53 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.103 15:32:53 -- accel/accel.sh@17 -- # local accel_module 00:07:32.103 15:32:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:32.103 15:32:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:32.103 15:32:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.103 15:32:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.103 15:32:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.103 15:32:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.103 15:32:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.103 15:32:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.103 15:32:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.103 15:32:53 -- accel/accel.sh@42 -- # jq -r . 00:07:32.361 [2024-07-24 15:32:53.702112] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:32.361 [2024-07-24 15:32:53.702270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59029 ] 00:07:32.361 [2024-07-24 15:32:53.880450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.619 [2024-07-24 15:32:54.113180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.519 15:32:56 -- accel/accel.sh@18 -- # out=' 00:07:34.519 SPDK Configuration: 00:07:34.519 Core mask: 0x1 00:07:34.519 00:07:34.519 Accel Perf Configuration: 00:07:34.519 Workload Type: crc32c 00:07:34.519 CRC-32C seed: 32 00:07:34.520 Transfer size: 4096 bytes 00:07:34.520 Vector count 1 00:07:34.520 Module: software 00:07:34.520 Queue depth: 32 00:07:34.520 Allocate depth: 32 00:07:34.520 # threads/core: 1 00:07:34.520 Run time: 1 seconds 00:07:34.520 Verify: Yes 00:07:34.520 00:07:34.520 Running for 1 seconds... 
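The configuration block above is the flag set read back: -S 32 is the CRC-32C seed, transfers default to 4096 bytes with one vector, and the software module runs at queue/allocate depth 32. In the result tables that follow, the MiB/s column is just the transfer rate times the transfer size; for the first run:

  echo $(( 387040 * 4096 / 1048576 ))   # 1511 MiB/s, matching the 387040/s row below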
00:07:34.520 00:07:34.520 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.520 ------------------------------------------------------------------------------------ 00:07:34.520 0,0 387040/s 1511 MiB/s 0 0 00:07:34.520 ==================================================================================== 00:07:34.520 Total 387040/s 1511 MiB/s 0 0' 00:07:34.520 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.520 15:32:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:34.520 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:34.520 15:32:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:34.520 15:32:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.520 15:32:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.520 15:32:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.520 15:32:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.520 15:32:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.520 15:32:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.520 15:32:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.520 15:32:56 -- accel/accel.sh@42 -- # jq -r . 00:07:34.777 [2024-07-24 15:32:56.147445] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:34.778 [2024-07-24 15:32:56.147613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59055 ] 00:07:34.778 [2024-07-24 15:32:56.317654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.035 [2024-07-24 15:32:56.506823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.294 15:32:56 -- accel/accel.sh@21 -- # val= 00:07:35.294 15:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.294 15:32:56 -- accel/accel.sh@21 -- # val= 00:07:35.294 15:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.294 15:32:56 -- accel/accel.sh@21 -- # val=0x1 00:07:35.294 15:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.294 15:32:56 -- accel/accel.sh@21 -- # val= 00:07:35.294 15:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.294 15:32:56 -- accel/accel.sh@21 -- # val= 00:07:35.294 15:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.294 15:32:56 -- accel/accel.sh@21 -- # val=crc32c 00:07:35.294 15:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.294 15:32:56 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.294 15:32:56 -- accel/accel.sh@21 -- # val=32 00:07:35.294 15:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.294 15:32:56 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:35.294 15:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.294 15:32:56 -- accel/accel.sh@21 -- # val= 00:07:35.294 15:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.294 15:32:56 -- accel/accel.sh@21 -- # val=software 00:07:35.294 15:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.294 15:32:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.294 15:32:56 -- accel/accel.sh@21 -- # val=32 00:07:35.294 15:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.294 15:32:56 -- accel/accel.sh@21 -- # val=32 00:07:35.294 15:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.294 15:32:56 -- accel/accel.sh@21 -- # val=1 00:07:35.294 15:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.294 15:32:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.294 15:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.294 15:32:56 -- accel/accel.sh@21 -- # val=Yes 00:07:35.294 15:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.294 15:32:56 -- accel/accel.sh@21 -- # val= 00:07:35.294 15:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.294 15:32:56 -- accel/accel.sh@21 -- # val= 00:07:35.294 15:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.294 15:32:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.195 15:32:58 -- accel/accel.sh@21 -- # val= 00:07:37.195 15:32:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.195 15:32:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.195 15:32:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.195 15:32:58 -- accel/accel.sh@21 -- # val= 00:07:37.195 15:32:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.195 15:32:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.195 15:32:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.195 15:32:58 -- accel/accel.sh@21 -- # val= 00:07:37.195 15:32:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.195 15:32:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.195 15:32:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.195 15:32:58 -- accel/accel.sh@21 -- # val= 00:07:37.195 15:32:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.195 15:32:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.195 15:32:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.195 15:32:58 -- accel/accel.sh@21 -- # val= 00:07:37.195 15:32:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.195 15:32:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.195 15:32:58 -- 
accel/accel.sh@20 -- # read -r var val 00:07:37.195 15:32:58 -- accel/accel.sh@21 -- # val= 00:07:37.195 15:32:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.195 15:32:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.195 15:32:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.195 15:32:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.195 15:32:58 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:37.195 15:32:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.195 00:07:37.195 real 0m4.860s 00:07:37.195 user 0m4.334s 00:07:37.195 sys 0m0.309s 00:07:37.195 15:32:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.195 ************************************ 00:07:37.195 END TEST accel_crc32c 00:07:37.195 ************************************ 00:07:37.195 15:32:58 -- common/autotest_common.sh@10 -- # set +x 00:07:37.195 15:32:58 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:37.195 15:32:58 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:37.195 15:32:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.195 15:32:58 -- common/autotest_common.sh@10 -- # set +x 00:07:37.195 ************************************ 00:07:37.195 START TEST accel_crc32c_C2 00:07:37.195 ************************************ 00:07:37.195 15:32:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:37.195 15:32:58 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.195 15:32:58 -- accel/accel.sh@17 -- # local accel_module 00:07:37.195 15:32:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:37.195 15:32:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:37.195 15:32:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.195 15:32:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.195 15:32:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.195 15:32:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.195 15:32:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.195 15:32:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.195 15:32:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.195 15:32:58 -- accel/accel.sh@42 -- # jq -r . 00:07:37.195 [2024-07-24 15:32:58.611467] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:37.195 [2024-07-24 15:32:58.611652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59101 ] 00:07:37.195 [2024-07-24 15:32:58.782179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.454 [2024-07-24 15:32:58.957836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.353 15:33:00 -- accel/accel.sh@18 -- # out=' 00:07:39.353 SPDK Configuration: 00:07:39.353 Core mask: 0x1 00:07:39.353 00:07:39.353 Accel Perf Configuration: 00:07:39.353 Workload Type: crc32c 00:07:39.353 CRC-32C seed: 0 00:07:39.353 Transfer size: 4096 bytes 00:07:39.353 Vector count 2 00:07:39.353 Module: software 00:07:39.353 Queue depth: 32 00:07:39.353 Allocate depth: 32 00:07:39.353 # threads/core: 1 00:07:39.353 Run time: 1 seconds 00:07:39.353 Verify: Yes 00:07:39.354 00:07:39.354 Running for 1 seconds... 
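The -C 2 run differs only in vector count: each operation covers two 4096-byte buffers, so the per-core row that follows reports the doubled figure while the Total line prints the single-buffer one. Same arithmetic as before:

  echo $(( 307616 * 2 * 4096 / 1048576 ))   # 2403 MiB/s, the per-core row
  echo $(( 307616 * 4096 / 1048576 ))       # 1201 MiB/s, the Total row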
00:07:39.354 00:07:39.354 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:39.354 ------------------------------------------------------------------------------------ 00:07:39.354 0,0 307616/s 2403 MiB/s 0 0 00:07:39.354 ==================================================================================== 00:07:39.354 Total 307616/s 1201 MiB/s 0 0' 00:07:39.354 15:33:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.354 15:33:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:39.354 15:33:00 -- accel/accel.sh@20 -- # read -r var val 00:07:39.354 15:33:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:39.354 15:33:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.354 15:33:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.354 15:33:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.354 15:33:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.354 15:33:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.354 15:33:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.354 15:33:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.354 15:33:00 -- accel/accel.sh@42 -- # jq -r . 00:07:39.612 [2024-07-24 15:33:00.994965] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:39.612 [2024-07-24 15:33:00.995156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59134 ] 00:07:39.612 [2024-07-24 15:33:01.166127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.870 [2024-07-24 15:33:01.345215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.129 15:33:01 -- accel/accel.sh@21 -- # val= 00:07:40.129 15:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # read -r var val 00:07:40.129 15:33:01 -- accel/accel.sh@21 -- # val= 00:07:40.129 15:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # read -r var val 00:07:40.129 15:33:01 -- accel/accel.sh@21 -- # val=0x1 00:07:40.129 15:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # read -r var val 00:07:40.129 15:33:01 -- accel/accel.sh@21 -- # val= 00:07:40.129 15:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # read -r var val 00:07:40.129 15:33:01 -- accel/accel.sh@21 -- # val= 00:07:40.129 15:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # read -r var val 00:07:40.129 15:33:01 -- accel/accel.sh@21 -- # val=crc32c 00:07:40.129 15:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.129 15:33:01 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # read -r var val 00:07:40.129 15:33:01 -- accel/accel.sh@21 -- # val=0 00:07:40.129 15:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # read -r var val 00:07:40.129 15:33:01 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:40.129 15:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # read -r var val 00:07:40.129 15:33:01 -- accel/accel.sh@21 -- # val= 00:07:40.129 15:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # read -r var val 00:07:40.129 15:33:01 -- accel/accel.sh@21 -- # val=software 00:07:40.129 15:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.129 15:33:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # read -r var val 00:07:40.129 15:33:01 -- accel/accel.sh@21 -- # val=32 00:07:40.129 15:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # read -r var val 00:07:40.129 15:33:01 -- accel/accel.sh@21 -- # val=32 00:07:40.129 15:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # read -r var val 00:07:40.129 15:33:01 -- accel/accel.sh@21 -- # val=1 00:07:40.129 15:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # read -r var val 00:07:40.129 15:33:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:40.129 15:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # read -r var val 00:07:40.129 15:33:01 -- accel/accel.sh@21 -- # val=Yes 00:07:40.129 15:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # read -r var val 00:07:40.129 15:33:01 -- accel/accel.sh@21 -- # val= 00:07:40.129 15:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # read -r var val 00:07:40.129 15:33:01 -- accel/accel.sh@21 -- # val= 00:07:40.129 15:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # IFS=: 00:07:40.129 15:33:01 -- accel/accel.sh@20 -- # read -r var val 00:07:42.032 15:33:03 -- accel/accel.sh@21 -- # val= 00:07:42.032 15:33:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.032 15:33:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.032 15:33:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.032 15:33:03 -- accel/accel.sh@21 -- # val= 00:07:42.032 15:33:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.032 15:33:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.032 15:33:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.032 15:33:03 -- accel/accel.sh@21 -- # val= 00:07:42.032 15:33:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.032 15:33:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.032 15:33:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.032 15:33:03 -- accel/accel.sh@21 -- # val= 00:07:42.032 15:33:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.032 15:33:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.032 15:33:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.032 15:33:03 -- accel/accel.sh@21 -- # val= 00:07:42.032 15:33:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.032 15:33:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.032 15:33:03 -- 
accel/accel.sh@20 -- # read -r var val 00:07:42.032 15:33:03 -- accel/accel.sh@21 -- # val= 00:07:42.032 15:33:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.032 15:33:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.032 15:33:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.032 15:33:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:42.032 15:33:03 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:42.032 15:33:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.032 00:07:42.032 real 0m4.770s 00:07:42.032 user 0m4.259s 00:07:42.032 sys 0m0.299s 00:07:42.032 ************************************ 00:07:42.032 END TEST accel_crc32c_C2 00:07:42.032 ************************************ 00:07:42.032 15:33:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.032 15:33:03 -- common/autotest_common.sh@10 -- # set +x 00:07:42.032 15:33:03 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:42.032 15:33:03 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:42.032 15:33:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.032 15:33:03 -- common/autotest_common.sh@10 -- # set +x 00:07:42.032 ************************************ 00:07:42.032 START TEST accel_copy 00:07:42.032 ************************************ 00:07:42.032 15:33:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:07:42.032 15:33:03 -- accel/accel.sh@16 -- # local accel_opc 00:07:42.032 15:33:03 -- accel/accel.sh@17 -- # local accel_module 00:07:42.032 15:33:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:42.032 15:33:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:42.032 15:33:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.032 15:33:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.032 15:33:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.032 15:33:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.032 15:33:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.032 15:33:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.032 15:33:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.032 15:33:03 -- accel/accel.sh@42 -- # jq -r . 00:07:42.032 [2024-07-24 15:33:03.428504] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:42.032 [2024-07-24 15:33:03.428654] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59175 ] 00:07:42.032 [2024-07-24 15:33:03.599325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.291 [2024-07-24 15:33:03.793906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.229 15:33:05 -- accel/accel.sh@18 -- # out=' 00:07:44.229 SPDK Configuration: 00:07:44.229 Core mask: 0x1 00:07:44.229 00:07:44.229 Accel Perf Configuration: 00:07:44.229 Workload Type: copy 00:07:44.229 Transfer size: 4096 bytes 00:07:44.229 Vector count 1 00:07:44.229 Module: software 00:07:44.229 Queue depth: 32 00:07:44.229 Allocate depth: 32 00:07:44.229 # threads/core: 1 00:07:44.229 Run time: 1 seconds 00:07:44.229 Verify: Yes 00:07:44.229 00:07:44.229 Running for 1 seconds... 
00:07:44.229 00:07:44.229 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.229 ------------------------------------------------------------------------------------ 00:07:44.229 0,0 249408/s 974 MiB/s 0 0 00:07:44.229 ==================================================================================== 00:07:44.229 Total 249408/s 974 MiB/s 0 0' 00:07:44.229 15:33:05 -- accel/accel.sh@20 -- # IFS=: 00:07:44.229 15:33:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:44.229 15:33:05 -- accel/accel.sh@20 -- # read -r var val 00:07:44.229 15:33:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:44.229 15:33:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.229 15:33:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.229 15:33:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.229 15:33:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.229 15:33:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.229 15:33:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.229 15:33:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.229 15:33:05 -- accel/accel.sh@42 -- # jq -r . 00:07:44.229 [2024-07-24 15:33:05.801679] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:44.229 [2024-07-24 15:33:05.801843] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59207 ] 00:07:44.488 [2024-07-24 15:33:05.971950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.747 [2024-07-24 15:33:06.155604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.747 15:33:06 -- accel/accel.sh@21 -- # val= 00:07:44.747 15:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.747 15:33:06 -- accel/accel.sh@21 -- # val= 00:07:44.747 15:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.747 15:33:06 -- accel/accel.sh@21 -- # val=0x1 00:07:44.747 15:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.747 15:33:06 -- accel/accel.sh@21 -- # val= 00:07:44.747 15:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.747 15:33:06 -- accel/accel.sh@21 -- # val= 00:07:44.747 15:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.747 15:33:06 -- accel/accel.sh@21 -- # val=copy 00:07:44.747 15:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.747 15:33:06 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.747 15:33:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:44.747 15:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.747 15:33:06 -- 
accel/accel.sh@21 -- # val= 00:07:44.747 15:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.747 15:33:06 -- accel/accel.sh@21 -- # val=software 00:07:44.747 15:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.747 15:33:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.747 15:33:06 -- accel/accel.sh@21 -- # val=32 00:07:44.747 15:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.747 15:33:06 -- accel/accel.sh@21 -- # val=32 00:07:44.747 15:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.747 15:33:06 -- accel/accel.sh@21 -- # val=1 00:07:44.747 15:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.747 15:33:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:44.747 15:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.747 15:33:06 -- accel/accel.sh@21 -- # val=Yes 00:07:44.747 15:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.747 15:33:06 -- accel/accel.sh@21 -- # val= 00:07:44.747 15:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.747 15:33:06 -- accel/accel.sh@21 -- # val= 00:07:44.747 15:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.747 15:33:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.651 15:33:08 -- accel/accel.sh@21 -- # val= 00:07:46.651 15:33:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.651 15:33:08 -- accel/accel.sh@20 -- # IFS=: 00:07:46.651 15:33:08 -- accel/accel.sh@20 -- # read -r var val 00:07:46.651 15:33:08 -- accel/accel.sh@21 -- # val= 00:07:46.651 15:33:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.651 15:33:08 -- accel/accel.sh@20 -- # IFS=: 00:07:46.651 15:33:08 -- accel/accel.sh@20 -- # read -r var val 00:07:46.651 15:33:08 -- accel/accel.sh@21 -- # val= 00:07:46.651 15:33:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.651 15:33:08 -- accel/accel.sh@20 -- # IFS=: 00:07:46.651 15:33:08 -- accel/accel.sh@20 -- # read -r var val 00:07:46.651 15:33:08 -- accel/accel.sh@21 -- # val= 00:07:46.651 15:33:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.651 15:33:08 -- accel/accel.sh@20 -- # IFS=: 00:07:46.651 15:33:08 -- accel/accel.sh@20 -- # read -r var val 00:07:46.651 15:33:08 -- accel/accel.sh@21 -- # val= 00:07:46.651 15:33:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.651 15:33:08 -- accel/accel.sh@20 -- # IFS=: 00:07:46.651 15:33:08 -- accel/accel.sh@20 -- # read -r var val 00:07:46.651 15:33:08 -- accel/accel.sh@21 -- # val= 00:07:46.651 15:33:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.651 15:33:08 -- accel/accel.sh@20 -- # IFS=: 00:07:46.651 15:33:08 -- 
accel/accel.sh@20 -- # read -r var val 00:07:46.651 15:33:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:46.651 15:33:08 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:46.651 15:33:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.651 00:07:46.651 real 0m4.725s 00:07:46.651 user 0m4.222s 00:07:46.651 sys 0m0.290s 00:07:46.651 15:33:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.651 15:33:08 -- common/autotest_common.sh@10 -- # set +x 00:07:46.651 ************************************ 00:07:46.651 END TEST accel_copy 00:07:46.651 ************************************ 00:07:46.651 15:33:08 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:46.651 15:33:08 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:46.651 15:33:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.651 15:33:08 -- common/autotest_common.sh@10 -- # set +x 00:07:46.651 ************************************ 00:07:46.651 START TEST accel_fill 00:07:46.651 ************************************ 00:07:46.651 15:33:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:46.651 15:33:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:46.651 15:33:08 -- accel/accel.sh@17 -- # local accel_module 00:07:46.651 15:33:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:46.651 15:33:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:46.651 15:33:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.651 15:33:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.651 15:33:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.651 15:33:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.651 15:33:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.651 15:33:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.651 15:33:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.651 15:33:08 -- accel/accel.sh@42 -- # jq -r . 00:07:46.651 [2024-07-24 15:33:08.206242] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:46.651 [2024-07-24 15:33:08.206417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59253 ] 00:07:46.910 [2024-07-24 15:33:08.377094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.169 [2024-07-24 15:33:08.554983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.071 15:33:10 -- accel/accel.sh@18 -- # out=' 00:07:49.071 SPDK Configuration: 00:07:49.071 Core mask: 0x1 00:07:49.071 00:07:49.071 Accel Perf Configuration: 00:07:49.071 Workload Type: fill 00:07:49.071 Fill pattern: 0x80 00:07:49.071 Transfer size: 4096 bytes 00:07:49.071 Vector count 1 00:07:49.071 Module: software 00:07:49.071 Queue depth: 64 00:07:49.071 Allocate depth: 64 00:07:49.071 # threads/core: 1 00:07:49.071 Run time: 1 seconds 00:07:49.071 Verify: Yes 00:07:49.071 00:07:49.071 Running for 1 seconds... 
00:07:49.071 00:07:49.071 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:49.071 ------------------------------------------------------------------------------------ 00:07:49.071 0,0 372672/s 1455 MiB/s 0 0 00:07:49.071 ==================================================================================== 00:07:49.071 Total 372672/s 1455 MiB/s 0 0' 00:07:49.071 15:33:10 -- accel/accel.sh@20 -- # IFS=: 00:07:49.071 15:33:10 -- accel/accel.sh@20 -- # read -r var val 00:07:49.071 15:33:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:49.071 15:33:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:49.071 15:33:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.071 15:33:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.071 15:33:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.071 15:33:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.071 15:33:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.071 15:33:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.071 15:33:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.071 15:33:10 -- accel/accel.sh@42 -- # jq -r . 00:07:49.071 [2024-07-24 15:33:10.598801] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:49.071 [2024-07-24 15:33:10.598959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59279 ] 00:07:49.330 [2024-07-24 15:33:10.768457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.587 [2024-07-24 15:33:10.955731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.587 15:33:11 -- accel/accel.sh@21 -- # val= 00:07:49.587 15:33:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # IFS=: 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # read -r var val 00:07:49.588 15:33:11 -- accel/accel.sh@21 -- # val= 00:07:49.588 15:33:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # IFS=: 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # read -r var val 00:07:49.588 15:33:11 -- accel/accel.sh@21 -- # val=0x1 00:07:49.588 15:33:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # IFS=: 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # read -r var val 00:07:49.588 15:33:11 -- accel/accel.sh@21 -- # val= 00:07:49.588 15:33:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # IFS=: 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # read -r var val 00:07:49.588 15:33:11 -- accel/accel.sh@21 -- # val= 00:07:49.588 15:33:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # IFS=: 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # read -r var val 00:07:49.588 15:33:11 -- accel/accel.sh@21 -- # val=fill 00:07:49.588 15:33:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.588 15:33:11 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # IFS=: 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # read -r var val 00:07:49.588 15:33:11 -- accel/accel.sh@21 -- # val=0x80 00:07:49.588 15:33:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # IFS=: 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # read -r var val 
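Each accel_perf run opens with a "Starting SPDK ... initialization..." banner followed by a bracketed "[ DPDK EAL parameters: ... ]" record (spdk_pid59279 for the fill run above) listing what the app passes to DPDK's EAL: --no-shconf plus a per-process --file-prefix=spdk_pid<N> keep hugepage and shared-memory state private to each test process so back-to-back runs cannot collide, --huge-unlink removes the hugepage files once they are mapped, --iova-mode=pa selects physical-address IOVA, and --base-virtaddr=0x200000000000 fixes the base address for memory mappings. To see the one-prefix-per-pid pattern in a saved copy of this console output (autorun.log is a hypothetical name for such a capture):

grep -o -- '--file-prefix=spdk_pid[0-9]*' autorun.log | sort -u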
00:07:49.588 15:33:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:49.588 15:33:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # IFS=: 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # read -r var val 00:07:49.588 15:33:11 -- accel/accel.sh@21 -- # val= 00:07:49.588 15:33:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # IFS=: 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # read -r var val 00:07:49.588 15:33:11 -- accel/accel.sh@21 -- # val=software 00:07:49.588 15:33:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.588 15:33:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # IFS=: 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # read -r var val 00:07:49.588 15:33:11 -- accel/accel.sh@21 -- # val=64 00:07:49.588 15:33:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # IFS=: 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # read -r var val 00:07:49.588 15:33:11 -- accel/accel.sh@21 -- # val=64 00:07:49.588 15:33:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # IFS=: 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # read -r var val 00:07:49.588 15:33:11 -- accel/accel.sh@21 -- # val=1 00:07:49.588 15:33:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # IFS=: 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # read -r var val 00:07:49.588 15:33:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:49.588 15:33:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # IFS=: 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # read -r var val 00:07:49.588 15:33:11 -- accel/accel.sh@21 -- # val=Yes 00:07:49.588 15:33:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # IFS=: 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # read -r var val 00:07:49.588 15:33:11 -- accel/accel.sh@21 -- # val= 00:07:49.588 15:33:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # IFS=: 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # read -r var val 00:07:49.588 15:33:11 -- accel/accel.sh@21 -- # val= 00:07:49.588 15:33:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # IFS=: 00:07:49.588 15:33:11 -- accel/accel.sh@20 -- # read -r var val 00:07:51.488 15:33:12 -- accel/accel.sh@21 -- # val= 00:07:51.488 15:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.488 15:33:12 -- accel/accel.sh@20 -- # IFS=: 00:07:51.488 15:33:12 -- accel/accel.sh@20 -- # read -r var val 00:07:51.488 15:33:12 -- accel/accel.sh@21 -- # val= 00:07:51.488 15:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.488 15:33:12 -- accel/accel.sh@20 -- # IFS=: 00:07:51.488 15:33:12 -- accel/accel.sh@20 -- # read -r var val 00:07:51.488 15:33:12 -- accel/accel.sh@21 -- # val= 00:07:51.488 15:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.488 15:33:12 -- accel/accel.sh@20 -- # IFS=: 00:07:51.488 15:33:12 -- accel/accel.sh@20 -- # read -r var val 00:07:51.488 15:33:12 -- accel/accel.sh@21 -- # val= 00:07:51.488 15:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.488 15:33:12 -- accel/accel.sh@20 -- # IFS=: 00:07:51.488 15:33:12 -- accel/accel.sh@20 -- # read -r var val 00:07:51.488 15:33:12 -- accel/accel.sh@21 -- # val= 00:07:51.488 15:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.488 15:33:12 -- accel/accel.sh@20 -- # IFS=: 
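The values the option loop above is parsing come from the run_test invocation a few records back: accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y. Here -f 128 is the fill byte, which the configuration dump reports in hex as "Fill pattern: 0x80", while -q 64 and -a 64 produce the "Queue depth: 64" and "Allocate depth: 64" lines. A one-line sanity check of the decimal-to-hex pairing:

printf 'fill byte: %#x\n' 128   # prints 0x80, the "Fill pattern" shown above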
00:07:51.488 15:33:12 -- accel/accel.sh@20 -- # read -r var val 00:07:51.488 15:33:12 -- accel/accel.sh@21 -- # val= 00:07:51.488 15:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.488 15:33:12 -- accel/accel.sh@20 -- # IFS=: 00:07:51.488 15:33:12 -- accel/accel.sh@20 -- # read -r var val 00:07:51.488 15:33:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:51.488 15:33:12 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:51.488 15:33:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.488 00:07:51.488 real 0m4.785s 00:07:51.488 user 0m4.277s 00:07:51.488 sys 0m0.296s 00:07:51.488 15:33:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.488 ************************************ 00:07:51.488 END TEST accel_fill 00:07:51.488 ************************************ 00:07:51.488 15:33:12 -- common/autotest_common.sh@10 -- # set +x 00:07:51.488 15:33:12 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:51.488 15:33:12 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:51.488 15:33:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:51.488 15:33:12 -- common/autotest_common.sh@10 -- # set +x 00:07:51.488 ************************************ 00:07:51.488 START TEST accel_copy_crc32c 00:07:51.488 ************************************ 00:07:51.488 15:33:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:51.488 15:33:12 -- accel/accel.sh@16 -- # local accel_opc 00:07:51.488 15:33:12 -- accel/accel.sh@17 -- # local accel_module 00:07:51.488 15:33:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:51.488 15:33:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:51.488 15:33:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.488 15:33:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.488 15:33:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.488 15:33:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.488 15:33:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.488 15:33:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.488 15:33:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.488 15:33:12 -- accel/accel.sh@42 -- # jq -r . 00:07:51.488 [2024-07-24 15:33:13.043311] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:51.488 [2024-07-24 15:33:13.043484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59326 ] 00:07:51.746 [2024-07-24 15:33:13.214540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.003 [2024-07-24 15:33:13.395977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.901 15:33:15 -- accel/accel.sh@18 -- # out=' 00:07:53.901 SPDK Configuration: 00:07:53.901 Core mask: 0x1 00:07:53.901 00:07:53.901 Accel Perf Configuration: 00:07:53.901 Workload Type: copy_crc32c 00:07:53.901 CRC-32C seed: 0 00:07:53.901 Vector size: 4096 bytes 00:07:53.901 Transfer size: 4096 bytes 00:07:53.901 Vector count 1 00:07:53.901 Module: software 00:07:53.901 Queue depth: 32 00:07:53.901 Allocate depth: 32 00:07:53.901 # threads/core: 1 00:07:53.901 Run time: 1 seconds 00:07:53.901 Verify: Yes 00:07:53.901 00:07:53.901 Running for 1 seconds... 
00:07:53.901 00:07:53.901 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:53.901 ------------------------------------------------------------------------------------ 00:07:53.901 0,0 202112/s 789 MiB/s 0 0 00:07:53.901 ==================================================================================== 00:07:53.901 Total 202112/s 789 MiB/s 0 0' 00:07:53.901 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.901 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.901 15:33:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:53.901 15:33:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:53.901 15:33:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.901 15:33:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.901 15:33:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.901 15:33:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.901 15:33:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.902 15:33:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.902 15:33:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.902 15:33:15 -- accel/accel.sh@42 -- # jq -r . 00:07:53.902 [2024-07-24 15:33:15.433935] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:53.902 [2024-07-24 15:33:15.434164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59352 ] 00:07:54.159 [2024-07-24 15:33:15.603717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.418 [2024-07-24 15:33:15.776622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.418 15:33:15 -- accel/accel.sh@21 -- # val= 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.418 15:33:15 -- accel/accel.sh@21 -- # val= 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.418 15:33:15 -- accel/accel.sh@21 -- # val=0x1 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.418 15:33:15 -- accel/accel.sh@21 -- # val= 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.418 15:33:15 -- accel/accel.sh@21 -- # val= 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.418 15:33:15 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.418 15:33:15 -- accel/accel.sh@21 -- # val=0 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.418 
15:33:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.418 15:33:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.418 15:33:15 -- accel/accel.sh@21 -- # val= 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.418 15:33:15 -- accel/accel.sh@21 -- # val=software 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.418 15:33:15 -- accel/accel.sh@21 -- # val=32 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.418 15:33:15 -- accel/accel.sh@21 -- # val=32 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.418 15:33:15 -- accel/accel.sh@21 -- # val=1 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.418 15:33:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.418 15:33:15 -- accel/accel.sh@21 -- # val=Yes 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.418 15:33:15 -- accel/accel.sh@21 -- # val= 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.418 15:33:15 -- accel/accel.sh@21 -- # val= 00:07:54.418 15:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # IFS=: 00:07:54.418 15:33:15 -- accel/accel.sh@20 -- # read -r var val 00:07:56.320 15:33:17 -- accel/accel.sh@21 -- # val= 00:07:56.320 15:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.320 15:33:17 -- accel/accel.sh@20 -- # IFS=: 00:07:56.320 15:33:17 -- accel/accel.sh@20 -- # read -r var val 00:07:56.320 15:33:17 -- accel/accel.sh@21 -- # val= 00:07:56.320 15:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.320 15:33:17 -- accel/accel.sh@20 -- # IFS=: 00:07:56.320 15:33:17 -- accel/accel.sh@20 -- # read -r var val 00:07:56.320 15:33:17 -- accel/accel.sh@21 -- # val= 00:07:56.320 15:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.320 15:33:17 -- accel/accel.sh@20 -- # IFS=: 00:07:56.320 15:33:17 -- accel/accel.sh@20 -- # read -r var val 00:07:56.320 15:33:17 -- accel/accel.sh@21 -- # val= 00:07:56.320 15:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.320 15:33:17 -- accel/accel.sh@20 -- # IFS=: 
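The Bandwidth column in each results table is just the Transfers column multiplied by the transfer size (with MiB taken as 2^20 bytes), which makes the tables easy to spot-check. A sketch for the three 4096-byte tables seen so far:

# copy 249408/s, fill 372672/s, copy_crc32c 202112/s, all at 4096 B per transfer
for tps in 249408 372672 202112; do
  echo $(( tps * 4096 / 1048576 ))   # prints 974, 1455, 789 MiB/s in turn
done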
00:07:56.320 15:33:17 -- accel/accel.sh@20 -- # read -r var val 00:07:56.320 15:33:17 -- accel/accel.sh@21 -- # val= 00:07:56.320 15:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.320 15:33:17 -- accel/accel.sh@20 -- # IFS=: 00:07:56.320 15:33:17 -- accel/accel.sh@20 -- # read -r var val 00:07:56.320 15:33:17 -- accel/accel.sh@21 -- # val= 00:07:56.320 15:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.320 15:33:17 -- accel/accel.sh@20 -- # IFS=: 00:07:56.320 15:33:17 -- accel/accel.sh@20 -- # read -r var val 00:07:56.320 ************************************ 00:07:56.320 END TEST accel_copy_crc32c 00:07:56.320 ************************************ 00:07:56.320 15:33:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:56.320 15:33:17 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:56.320 15:33:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.320 00:07:56.320 real 0m4.749s 00:07:56.320 user 0m4.245s 00:07:56.320 sys 0m0.294s 00:07:56.320 15:33:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.320 15:33:17 -- common/autotest_common.sh@10 -- # set +x 00:07:56.320 15:33:17 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:56.320 15:33:17 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:56.320 15:33:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.320 15:33:17 -- common/autotest_common.sh@10 -- # set +x 00:07:56.320 ************************************ 00:07:56.320 START TEST accel_copy_crc32c_C2 00:07:56.320 ************************************ 00:07:56.320 15:33:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:56.320 15:33:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:56.320 15:33:17 -- accel/accel.sh@17 -- # local accel_module 00:07:56.320 15:33:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:56.320 15:33:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:56.320 15:33:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.320 15:33:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.320 15:33:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.320 15:33:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.320 15:33:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.320 15:33:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.320 15:33:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.320 15:33:17 -- accel/accel.sh@42 -- # jq -r . 00:07:56.320 [2024-07-24 15:33:17.845393] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:56.320 [2024-07-24 15:33:17.845567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59398 ] 00:07:56.578 [2024-07-24 15:33:18.015075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.837 [2024-07-24 15:33:18.186277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.738 15:33:20 -- accel/accel.sh@18 -- # out=' 00:07:58.738 SPDK Configuration: 00:07:58.738 Core mask: 0x1 00:07:58.738 00:07:58.738 Accel Perf Configuration: 00:07:58.738 Workload Type: copy_crc32c 00:07:58.738 CRC-32C seed: 0 00:07:58.738 Vector size: 4096 bytes 00:07:58.738 Transfer size: 8192 bytes 00:07:58.738 Vector count 2 00:07:58.738 Module: software 00:07:58.738 Queue depth: 32 00:07:58.738 Allocate depth: 32 00:07:58.738 # threads/core: 1 00:07:58.738 Run time: 1 seconds 00:07:58.738 Verify: Yes 00:07:58.738 00:07:58.738 Running for 1 seconds... 00:07:58.738 00:07:58.738 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:58.738 ------------------------------------------------------------------------------------ 00:07:58.738 0,0 144192/s 1126 MiB/s 0 0 00:07:58.738 ==================================================================================== 00:07:58.738 Total 144192/s 563 MiB/s 0 0' 00:07:58.738 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:58.738 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:58.738 15:33:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:58.738 15:33:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:58.738 15:33:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:58.738 15:33:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:58.738 15:33:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.738 15:33:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.738 15:33:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:58.738 15:33:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:58.738 15:33:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:58.738 15:33:20 -- accel/accel.sh@42 -- # jq -r . 00:07:58.738 [2024-07-24 15:33:20.205931] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:58.738 [2024-07-24 15:33:20.206103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59430 ] 00:07:58.996 [2024-07-24 15:33:20.375098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.996 [2024-07-24 15:33:20.545857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val= 00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val= 00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val=0x1 00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val= 00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val= 00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val=0 00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val= 00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val=software 00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val=32 00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val=32 
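For the copy_crc32c_C2 table above, -C 2 raises the vector count to 2, so each transfer moves 2 x 4096 = 8192 bytes. The per-core row is consistent with that size, while the Total row appears to count only a single 4096-byte vector, which would explain the halved figure:

echo $(( 144192 * 8192 / 1048576 ))   # 1126 MiB/s, matching the 0,0 row
echo $(( 144192 * 4096 / 1048576 ))   # 563 MiB/s, matching the Total row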
00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val=1 00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val=Yes 00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val= 00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:07:59.255 15:33:20 -- accel/accel.sh@21 -- # val= 00:07:59.255 15:33:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # IFS=: 00:07:59.255 15:33:20 -- accel/accel.sh@20 -- # read -r var val 00:08:01.155 15:33:22 -- accel/accel.sh@21 -- # val= 00:08:01.155 15:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.155 15:33:22 -- accel/accel.sh@20 -- # IFS=: 00:08:01.155 15:33:22 -- accel/accel.sh@20 -- # read -r var val 00:08:01.155 15:33:22 -- accel/accel.sh@21 -- # val= 00:08:01.155 15:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.155 15:33:22 -- accel/accel.sh@20 -- # IFS=: 00:08:01.155 15:33:22 -- accel/accel.sh@20 -- # read -r var val 00:08:01.155 15:33:22 -- accel/accel.sh@21 -- # val= 00:08:01.155 15:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.155 15:33:22 -- accel/accel.sh@20 -- # IFS=: 00:08:01.155 15:33:22 -- accel/accel.sh@20 -- # read -r var val 00:08:01.155 15:33:22 -- accel/accel.sh@21 -- # val= 00:08:01.155 15:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.155 15:33:22 -- accel/accel.sh@20 -- # IFS=: 00:08:01.155 15:33:22 -- accel/accel.sh@20 -- # read -r var val 00:08:01.155 15:33:22 -- accel/accel.sh@21 -- # val= 00:08:01.155 15:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.155 15:33:22 -- accel/accel.sh@20 -- # IFS=: 00:08:01.155 15:33:22 -- accel/accel.sh@20 -- # read -r var val 00:08:01.155 15:33:22 -- accel/accel.sh@21 -- # val= 00:08:01.155 15:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.155 15:33:22 -- accel/accel.sh@20 -- # IFS=: 00:08:01.155 15:33:22 -- accel/accel.sh@20 -- # read -r var val 00:08:01.155 15:33:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:01.155 15:33:22 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:01.155 15:33:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.155 00:08:01.155 real 0m4.709s 00:08:01.155 user 0m4.215s 00:08:01.155 sys 0m0.284s 00:08:01.155 15:33:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.155 15:33:22 -- common/autotest_common.sh@10 -- # set +x 00:08:01.155 ************************************ 00:08:01.155 END TEST accel_copy_crc32c_C2 00:08:01.155 ************************************ 00:08:01.155 15:33:22 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:01.155 15:33:22 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
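Every test in this stream goes through the run_test helper just invoked for accel_dualcast: it prints the asterisk-framed START TEST/END TEST banners and times the wrapped command, which is where the real/user/sys lines come from. A minimal sketch of that pattern, assuming only what the banners show (the actual helper in common/autotest_common.sh also manages xtrace state and exit codes):

run_test() {
  local name=$1; shift
  echo '************************************'
  echo "START TEST $name"
  echo '************************************'
  time "$@"                  # emits the real/user/sys lines seen after each test
  echo '************************************'
  echo "END TEST $name"
  echo '************************************'
}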
00:08:01.155 15:33:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:01.155 15:33:22 -- common/autotest_common.sh@10 -- # set +x 00:08:01.155 ************************************ 00:08:01.155 START TEST accel_dualcast 00:08:01.155 ************************************ 00:08:01.155 15:33:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:08:01.155 15:33:22 -- accel/accel.sh@16 -- # local accel_opc 00:08:01.155 15:33:22 -- accel/accel.sh@17 -- # local accel_module 00:08:01.155 15:33:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:08:01.155 15:33:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:01.155 15:33:22 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.155 15:33:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:01.155 15:33:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.155 15:33:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.155 15:33:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:01.155 15:33:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:01.155 15:33:22 -- accel/accel.sh@41 -- # local IFS=, 00:08:01.155 15:33:22 -- accel/accel.sh@42 -- # jq -r . 00:08:01.155 [2024-07-24 15:33:22.606808] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:01.155 [2024-07-24 15:33:22.606973] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59471 ] 00:08:01.414 [2024-07-24 15:33:22.775217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.414 [2024-07-24 15:33:22.957096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.312 15:33:24 -- accel/accel.sh@18 -- # out=' 00:08:03.312 SPDK Configuration: 00:08:03.312 Core mask: 0x1 00:08:03.312 00:08:03.312 Accel Perf Configuration: 00:08:03.312 Workload Type: dualcast 00:08:03.312 Transfer size: 4096 bytes 00:08:03.312 Vector count 1 00:08:03.312 Module: software 00:08:03.312 Queue depth: 32 00:08:03.312 Allocate depth: 32 00:08:03.312 # threads/core: 1 00:08:03.312 Run time: 1 seconds 00:08:03.312 Verify: Yes 00:08:03.312 00:08:03.312 Running for 1 seconds... 00:08:03.312 00:08:03.312 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:03.312 ------------------------------------------------------------------------------------ 00:08:03.312 0,0 275168/s 1074 MiB/s 0 0 00:08:03.312 ==================================================================================== 00:08:03.312 Total 275168/s 1074 MiB/s 0 0' 00:08:03.571 15:33:24 -- accel/accel.sh@20 -- # IFS=: 00:08:03.571 15:33:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:03.571 15:33:24 -- accel/accel.sh@20 -- # read -r var val 00:08:03.571 15:33:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:03.571 15:33:24 -- accel/accel.sh@12 -- # build_accel_config 00:08:03.571 15:33:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:03.571 15:33:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.571 15:33:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.571 15:33:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:03.571 15:33:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:03.571 15:33:24 -- accel/accel.sh@41 -- # local IFS=, 00:08:03.571 15:33:24 -- accel/accel.sh@42 -- # jq -r . 
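The dualcast run above (275168 transfers/s, 1074 MiB/s) can be reproduced outside the harness with the same accel_perf flags recorded in the trace. A sketch, leaving out -c /dev/fd/62, which the harness uses to feed in the JSON that build_accel_config assembles:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y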
00:08:03.571 [2024-07-24 15:33:24.954648] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:03.571 [2024-07-24 15:33:24.955308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59497 ] 00:08:03.571 [2024-07-24 15:33:25.130033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.829 [2024-07-24 15:33:25.334450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.088 15:33:25 -- accel/accel.sh@21 -- # val= 00:08:04.088 15:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # IFS=: 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # read -r var val 00:08:04.088 15:33:25 -- accel/accel.sh@21 -- # val= 00:08:04.088 15:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # IFS=: 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # read -r var val 00:08:04.088 15:33:25 -- accel/accel.sh@21 -- # val=0x1 00:08:04.088 15:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # IFS=: 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # read -r var val 00:08:04.088 15:33:25 -- accel/accel.sh@21 -- # val= 00:08:04.088 15:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # IFS=: 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # read -r var val 00:08:04.088 15:33:25 -- accel/accel.sh@21 -- # val= 00:08:04.088 15:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # IFS=: 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # read -r var val 00:08:04.088 15:33:25 -- accel/accel.sh@21 -- # val=dualcast 00:08:04.088 15:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.088 15:33:25 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # IFS=: 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # read -r var val 00:08:04.088 15:33:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:04.088 15:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # IFS=: 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # read -r var val 00:08:04.088 15:33:25 -- accel/accel.sh@21 -- # val= 00:08:04.088 15:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # IFS=: 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # read -r var val 00:08:04.088 15:33:25 -- accel/accel.sh@21 -- # val=software 00:08:04.088 15:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.088 15:33:25 -- accel/accel.sh@23 -- # accel_module=software 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # IFS=: 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # read -r var val 00:08:04.088 15:33:25 -- accel/accel.sh@21 -- # val=32 00:08:04.088 15:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # IFS=: 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # read -r var val 00:08:04.088 15:33:25 -- accel/accel.sh@21 -- # val=32 00:08:04.088 15:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # IFS=: 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # read -r var val 00:08:04.088 15:33:25 -- accel/accel.sh@21 -- # val=1 00:08:04.088 15:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # IFS=: 00:08:04.088 
15:33:25 -- accel/accel.sh@20 -- # read -r var val 00:08:04.088 15:33:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:04.088 15:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # IFS=: 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # read -r var val 00:08:04.088 15:33:25 -- accel/accel.sh@21 -- # val=Yes 00:08:04.088 15:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # IFS=: 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # read -r var val 00:08:04.088 15:33:25 -- accel/accel.sh@21 -- # val= 00:08:04.088 15:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # IFS=: 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # read -r var val 00:08:04.088 15:33:25 -- accel/accel.sh@21 -- # val= 00:08:04.088 15:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # IFS=: 00:08:04.088 15:33:25 -- accel/accel.sh@20 -- # read -r var val 00:08:05.991 15:33:27 -- accel/accel.sh@21 -- # val= 00:08:05.991 15:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.991 15:33:27 -- accel/accel.sh@20 -- # IFS=: 00:08:05.991 15:33:27 -- accel/accel.sh@20 -- # read -r var val 00:08:05.991 15:33:27 -- accel/accel.sh@21 -- # val= 00:08:05.991 15:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.991 15:33:27 -- accel/accel.sh@20 -- # IFS=: 00:08:05.991 15:33:27 -- accel/accel.sh@20 -- # read -r var val 00:08:05.991 15:33:27 -- accel/accel.sh@21 -- # val= 00:08:05.991 15:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.991 15:33:27 -- accel/accel.sh@20 -- # IFS=: 00:08:05.991 15:33:27 -- accel/accel.sh@20 -- # read -r var val 00:08:05.991 15:33:27 -- accel/accel.sh@21 -- # val= 00:08:05.991 15:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.991 15:33:27 -- accel/accel.sh@20 -- # IFS=: 00:08:05.991 15:33:27 -- accel/accel.sh@20 -- # read -r var val 00:08:05.991 15:33:27 -- accel/accel.sh@21 -- # val= 00:08:05.991 15:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.991 15:33:27 -- accel/accel.sh@20 -- # IFS=: 00:08:05.991 15:33:27 -- accel/accel.sh@20 -- # read -r var val 00:08:05.991 15:33:27 -- accel/accel.sh@21 -- # val= 00:08:05.991 15:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.991 15:33:27 -- accel/accel.sh@20 -- # IFS=: 00:08:05.991 15:33:27 -- accel/accel.sh@20 -- # read -r var val 00:08:05.991 15:33:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:05.991 15:33:27 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:08:05.991 15:33:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.991 00:08:05.991 real 0m4.737s 00:08:05.991 user 0m4.228s 00:08:05.991 sys 0m0.299s 00:08:05.991 15:33:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.991 ************************************ 00:08:05.991 END TEST accel_dualcast 00:08:05.991 ************************************ 00:08:05.991 15:33:27 -- common/autotest_common.sh@10 -- # set +x 00:08:05.991 15:33:27 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:05.991 15:33:27 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:05.991 15:33:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:05.991 15:33:27 -- common/autotest_common.sh@10 -- # set +x 00:08:05.991 ************************************ 00:08:05.991 START TEST accel_compare 00:08:05.991 ************************************ 00:08:05.991 15:33:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:08:05.991 
15:33:27 -- accel/accel.sh@16 -- # local accel_opc 00:08:05.991 15:33:27 -- accel/accel.sh@17 -- # local accel_module 00:08:05.991 15:33:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:08:05.991 15:33:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:05.991 15:33:27 -- accel/accel.sh@12 -- # build_accel_config 00:08:05.991 15:33:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:05.991 15:33:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.991 15:33:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.991 15:33:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:05.991 15:33:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:05.991 15:33:27 -- accel/accel.sh@41 -- # local IFS=, 00:08:05.991 15:33:27 -- accel/accel.sh@42 -- # jq -r . 00:08:05.991 [2024-07-24 15:33:27.393741] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:05.991 [2024-07-24 15:33:27.393891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59549 ] 00:08:05.991 [2024-07-24 15:33:27.562898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.249 [2024-07-24 15:33:27.733029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.148 15:33:29 -- accel/accel.sh@18 -- # out=' 00:08:08.148 SPDK Configuration: 00:08:08.148 Core mask: 0x1 00:08:08.148 00:08:08.148 Accel Perf Configuration: 00:08:08.148 Workload Type: compare 00:08:08.148 Transfer size: 4096 bytes 00:08:08.148 Vector count 1 00:08:08.148 Module: software 00:08:08.148 Queue depth: 32 00:08:08.148 Allocate depth: 32 00:08:08.148 # threads/core: 1 00:08:08.148 Run time: 1 seconds 00:08:08.148 Verify: Yes 00:08:08.148 00:08:08.148 Running for 1 seconds... 00:08:08.148 00:08:08.148 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:08.148 ------------------------------------------------------------------------------------ 00:08:08.149 0,0 359936/s 1406 MiB/s 0 0 00:08:08.149 ==================================================================================== 00:08:08.149 Total 359936/s 1406 MiB/s 0 0' 00:08:08.149 15:33:29 -- accel/accel.sh@20 -- # IFS=: 00:08:08.149 15:33:29 -- accel/accel.sh@20 -- # read -r var val 00:08:08.149 15:33:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:08.149 15:33:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:08.149 15:33:29 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.149 15:33:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:08.149 15:33:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.149 15:33:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.149 15:33:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:08.149 15:33:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:08.149 15:33:29 -- accel/accel.sh@41 -- # local IFS=, 00:08:08.149 15:33:29 -- accel/accel.sh@42 -- # jq -r . 00:08:08.406 [2024-07-24 15:33:29.747993] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:08.406 [2024-07-24 15:33:29.748171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59575 ] 00:08:08.406 [2024-07-24 15:33:29.919777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.664 [2024-07-24 15:33:30.106718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.922 15:33:30 -- accel/accel.sh@21 -- # val= 00:08:08.922 15:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # IFS=: 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # read -r var val 00:08:08.922 15:33:30 -- accel/accel.sh@21 -- # val= 00:08:08.922 15:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # IFS=: 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # read -r var val 00:08:08.922 15:33:30 -- accel/accel.sh@21 -- # val=0x1 00:08:08.922 15:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # IFS=: 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # read -r var val 00:08:08.922 15:33:30 -- accel/accel.sh@21 -- # val= 00:08:08.922 15:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # IFS=: 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # read -r var val 00:08:08.922 15:33:30 -- accel/accel.sh@21 -- # val= 00:08:08.922 15:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # IFS=: 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # read -r var val 00:08:08.922 15:33:30 -- accel/accel.sh@21 -- # val=compare 00:08:08.922 15:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.922 15:33:30 -- accel/accel.sh@24 -- # accel_opc=compare 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # IFS=: 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # read -r var val 00:08:08.922 15:33:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:08.922 15:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # IFS=: 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # read -r var val 00:08:08.922 15:33:30 -- accel/accel.sh@21 -- # val= 00:08:08.922 15:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # IFS=: 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # read -r var val 00:08:08.922 15:33:30 -- accel/accel.sh@21 -- # val=software 00:08:08.922 15:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.922 15:33:30 -- accel/accel.sh@23 -- # accel_module=software 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # IFS=: 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # read -r var val 00:08:08.922 15:33:30 -- accel/accel.sh@21 -- # val=32 00:08:08.922 15:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # IFS=: 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # read -r var val 00:08:08.922 15:33:30 -- accel/accel.sh@21 -- # val=32 00:08:08.922 15:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # IFS=: 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # read -r var val 00:08:08.922 15:33:30 -- accel/accel.sh@21 -- # val=1 00:08:08.922 15:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # IFS=: 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # read -r var val 00:08:08.922 15:33:30 -- accel/accel.sh@21 -- # val='1 seconds' 
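The compare workload whose second run is being configured above verifies two buffers against each other, so the Failed and Miscompares columns (both 0 in its table) are the pass/fail signal; bandwidth follows the same transfers-times-size formula, and the xor table further down checks out the same way:

echo $(( 359936 * 4096 / 1048576 ))   # 1406 MiB/s, the compare table above
echo $(( 199392 * 4096 / 1048576 ))   # 778 MiB/s, the xor table that follows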
00:08:08.922 15:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # IFS=: 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # read -r var val 00:08:08.922 15:33:30 -- accel/accel.sh@21 -- # val=Yes 00:08:08.922 15:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # IFS=: 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # read -r var val 00:08:08.922 15:33:30 -- accel/accel.sh@21 -- # val= 00:08:08.922 15:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # IFS=: 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # read -r var val 00:08:08.922 15:33:30 -- accel/accel.sh@21 -- # val= 00:08:08.922 15:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # IFS=: 00:08:08.922 15:33:30 -- accel/accel.sh@20 -- # read -r var val 00:08:10.819 15:33:32 -- accel/accel.sh@21 -- # val= 00:08:10.819 15:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.819 15:33:32 -- accel/accel.sh@20 -- # IFS=: 00:08:10.819 15:33:32 -- accel/accel.sh@20 -- # read -r var val 00:08:10.819 15:33:32 -- accel/accel.sh@21 -- # val= 00:08:10.819 15:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.819 15:33:32 -- accel/accel.sh@20 -- # IFS=: 00:08:10.819 15:33:32 -- accel/accel.sh@20 -- # read -r var val 00:08:10.819 15:33:32 -- accel/accel.sh@21 -- # val= 00:08:10.819 15:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.819 15:33:32 -- accel/accel.sh@20 -- # IFS=: 00:08:10.819 15:33:32 -- accel/accel.sh@20 -- # read -r var val 00:08:10.819 15:33:32 -- accel/accel.sh@21 -- # val= 00:08:10.819 15:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.819 15:33:32 -- accel/accel.sh@20 -- # IFS=: 00:08:10.819 15:33:32 -- accel/accel.sh@20 -- # read -r var val 00:08:10.819 15:33:32 -- accel/accel.sh@21 -- # val= 00:08:10.819 15:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.819 15:33:32 -- accel/accel.sh@20 -- # IFS=: 00:08:10.819 15:33:32 -- accel/accel.sh@20 -- # read -r var val 00:08:10.819 15:33:32 -- accel/accel.sh@21 -- # val= 00:08:10.819 15:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.819 15:33:32 -- accel/accel.sh@20 -- # IFS=: 00:08:10.819 15:33:32 -- accel/accel.sh@20 -- # read -r var val 00:08:10.819 15:33:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:10.819 15:33:32 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:08:10.819 15:33:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.819 00:08:10.819 real 0m4.753s 00:08:10.819 user 0m4.234s 00:08:10.819 sys 0m0.307s 00:08:10.819 15:33:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.819 15:33:32 -- common/autotest_common.sh@10 -- # set +x 00:08:10.819 ************************************ 00:08:10.819 END TEST accel_compare 00:08:10.819 ************************************ 00:08:10.819 15:33:32 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:10.819 15:33:32 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:10.819 15:33:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.819 15:33:32 -- common/autotest_common.sh@10 -- # set +x 00:08:10.819 ************************************ 00:08:10.819 START TEST accel_xor 00:08:10.819 ************************************ 00:08:10.819 15:33:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:08:10.819 15:33:32 -- accel/accel.sh@16 -- # local accel_opc 00:08:10.819 15:33:32 -- accel/accel.sh@17 -- # local accel_module 00:08:10.819 
15:33:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:08:10.819 15:33:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:10.819 15:33:32 -- accel/accel.sh@12 -- # build_accel_config 00:08:10.819 15:33:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:10.819 15:33:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.819 15:33:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.819 15:33:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:10.819 15:33:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:10.819 15:33:32 -- accel/accel.sh@41 -- # local IFS=, 00:08:10.819 15:33:32 -- accel/accel.sh@42 -- # jq -r . 00:08:10.819 [2024-07-24 15:33:32.195960] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:10.819 [2024-07-24 15:33:32.196128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59616 ] 00:08:10.819 [2024-07-24 15:33:32.357558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.076 [2024-07-24 15:33:32.546563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.980 15:33:34 -- accel/accel.sh@18 -- # out=' 00:08:12.980 SPDK Configuration: 00:08:12.980 Core mask: 0x1 00:08:12.980 00:08:12.980 Accel Perf Configuration: 00:08:12.980 Workload Type: xor 00:08:12.980 Source buffers: 2 00:08:12.980 Transfer size: 4096 bytes 00:08:12.980 Vector count 1 00:08:12.980 Module: software 00:08:12.980 Queue depth: 32 00:08:12.980 Allocate depth: 32 00:08:12.980 # threads/core: 1 00:08:12.980 Run time: 1 seconds 00:08:12.980 Verify: Yes 00:08:12.980 00:08:12.980 Running for 1 seconds... 00:08:12.980 00:08:12.980 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:12.980 ------------------------------------------------------------------------------------ 00:08:12.980 0,0 199392/s 778 MiB/s 0 0 00:08:12.980 ==================================================================================== 00:08:12.980 Total 199392/s 778 MiB/s 0 0' 00:08:12.980 15:33:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.980 15:33:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:12.980 15:33:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.980 15:33:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:12.980 15:33:34 -- accel/accel.sh@12 -- # build_accel_config 00:08:12.980 15:33:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:12.980 15:33:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.980 15:33:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.980 15:33:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:12.980 15:33:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:12.980 15:33:34 -- accel/accel.sh@41 -- # local IFS=, 00:08:12.980 15:33:34 -- accel/accel.sh@42 -- # jq -r . 00:08:12.980 [2024-07-24 15:33:34.569314] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
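A quick sanity check on the xor numbers above: bandwidth is just the transfer rate times the 4096-byte transfer size,

    199392 transfers/s × 4096 bytes = 816,709,632 B/s ≈ 778 MiB/s (dividing by 1024²)

which agrees with both the per-core row and the Total line.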
00:08:12.980 [2024-07-24 15:33:34.569474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59652 ] 00:08:13.244 [2024-07-24 15:33:34.739959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.515 [2024-07-24 15:33:34.920199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.772 15:33:35 -- accel/accel.sh@21 -- # val= 00:08:13.772 15:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.772 15:33:35 -- accel/accel.sh@21 -- # val= 00:08:13.772 15:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.772 15:33:35 -- accel/accel.sh@21 -- # val=0x1 00:08:13.772 15:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.772 15:33:35 -- accel/accel.sh@21 -- # val= 00:08:13.772 15:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.772 15:33:35 -- accel/accel.sh@21 -- # val= 00:08:13.772 15:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.772 15:33:35 -- accel/accel.sh@21 -- # val=xor 00:08:13.772 15:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.772 15:33:35 -- accel/accel.sh@24 -- # accel_opc=xor 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.772 15:33:35 -- accel/accel.sh@21 -- # val=2 00:08:13.772 15:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.772 15:33:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:13.772 15:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.772 15:33:35 -- accel/accel.sh@21 -- # val= 00:08:13.772 15:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.772 15:33:35 -- accel/accel.sh@21 -- # val=software 00:08:13.772 15:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.772 15:33:35 -- accel/accel.sh@23 -- # accel_module=software 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.772 15:33:35 -- accel/accel.sh@21 -- # val=32 00:08:13.772 15:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.772 15:33:35 -- accel/accel.sh@21 -- # val=32 00:08:13.772 15:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.772 15:33:35 -- accel/accel.sh@21 -- # val=1 00:08:13.772 15:33:35 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.772 15:33:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:13.772 15:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.772 15:33:35 -- accel/accel.sh@21 -- # val=Yes 00:08:13.772 15:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.772 15:33:35 -- accel/accel.sh@21 -- # val= 00:08:13.772 15:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.772 15:33:35 -- accel/accel.sh@21 -- # val= 00:08:13.772 15:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.772 15:33:35 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 15:33:36 -- accel/accel.sh@21 -- # val= 00:08:15.672 15:33:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 15:33:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 15:33:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 15:33:36 -- accel/accel.sh@21 -- # val= 00:08:15.672 15:33:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 15:33:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 15:33:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 15:33:36 -- accel/accel.sh@21 -- # val= 00:08:15.672 15:33:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 15:33:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 15:33:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 15:33:36 -- accel/accel.sh@21 -- # val= 00:08:15.672 15:33:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 15:33:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 15:33:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 15:33:36 -- accel/accel.sh@21 -- # val= 00:08:15.672 15:33:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 15:33:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 15:33:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 15:33:36 -- accel/accel.sh@21 -- # val= 00:08:15.672 15:33:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 15:33:36 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 15:33:36 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 15:33:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:15.672 15:33:36 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:08:15.672 15:33:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.672 00:08:15.673 real 0m4.782s 00:08:15.673 user 0m4.293s 00:08:15.673 sys 0m0.275s 00:08:15.673 15:33:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.673 ************************************ 00:08:15.673 END TEST accel_xor 00:08:15.673 ************************************ 00:08:15.673 15:33:36 -- common/autotest_common.sh@10 -- # set +x 00:08:15.673 15:33:36 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:15.673 15:33:36 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:15.673 15:33:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:15.673 15:33:36 -- common/autotest_common.sh@10 -- # set +x 00:08:15.673 ************************************ 00:08:15.673 START TEST accel_xor 00:08:15.673 ************************************ 00:08:15.673 
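The START TEST / END TEST banners and the xtrace toggling in the trace above come from the run_test helper in common/autotest_common.sh, which wraps every sub-test. Roughly, and only as a simplified sketch (the real helper also times the wrapped command, producing the real/user/sys lines, and manages xtrace state):

    run_test() {
        local test_name=$1; shift
        echo "START TEST $test_name"
        time "$@"                 # e.g. accel_test -t 1 -w xor -y -x 3
        local rc=$?
        echo "END TEST $test_name"
        return $rc
    }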
15:33:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:08:15.673 15:33:36 -- accel/accel.sh@16 -- # local accel_opc 00:08:15.673 15:33:36 -- accel/accel.sh@17 -- # local accel_module 00:08:15.673 15:33:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:08:15.673 15:33:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:15.673 15:33:36 -- accel/accel.sh@12 -- # build_accel_config 00:08:15.673 15:33:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:15.673 15:33:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.673 15:33:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.673 15:33:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:15.673 15:33:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:15.673 15:33:36 -- accel/accel.sh@41 -- # local IFS=, 00:08:15.673 15:33:36 -- accel/accel.sh@42 -- # jq -r . 00:08:15.673 [2024-07-24 15:33:37.020028] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:15.673 [2024-07-24 15:33:37.020174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59694 ] 00:08:15.673 [2024-07-24 15:33:37.179215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.930 [2024-07-24 15:33:37.363546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.830 15:33:39 -- accel/accel.sh@18 -- # out=' 00:08:17.830 SPDK Configuration: 00:08:17.830 Core mask: 0x1 00:08:17.830 00:08:17.830 Accel Perf Configuration: 00:08:17.830 Workload Type: xor 00:08:17.830 Source buffers: 3 00:08:17.830 Transfer size: 4096 bytes 00:08:17.830 Vector count 1 00:08:17.830 Module: software 00:08:17.830 Queue depth: 32 00:08:17.830 Allocate depth: 32 00:08:17.830 # threads/core: 1 00:08:17.830 Run time: 1 seconds 00:08:17.830 Verify: Yes 00:08:17.830 00:08:17.830 Running for 1 seconds... 00:08:17.830 00:08:17.830 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:17.830 ------------------------------------------------------------------------------------ 00:08:17.830 0,0 190720/s 745 MiB/s 0 0 00:08:17.830 ==================================================================================== 00:08:17.830 Total 190720/s 745 MiB/s 0 0' 00:08:17.830 15:33:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:17.830 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:17.830 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:17.830 15:33:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:17.830 15:33:39 -- accel/accel.sh@12 -- # build_accel_config 00:08:17.830 15:33:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:17.830 15:33:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.830 15:33:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.830 15:33:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:17.830 15:33:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:17.830 15:33:39 -- accel/accel.sh@41 -- # local IFS=, 00:08:17.830 15:33:39 -- accel/accel.sh@42 -- # jq -r . 00:08:17.830 [2024-07-24 15:33:39.417410] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
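For an out-of-harness repro of the three-source case, the flags in the invocation above can be reused as-is; the -c /dev/fd/62 argument is just the JSON accel config the harness pipes in, and the path below assumes this VM's spdk_repo checkout:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3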
00:08:17.830 [2024-07-24 15:33:39.417553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59726 ] 00:08:18.088 [2024-07-24 15:33:39.588378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.347 [2024-07-24 15:33:39.778634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.605 15:33:39 -- accel/accel.sh@21 -- # val= 00:08:18.605 15:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.605 15:33:39 -- accel/accel.sh@21 -- # val= 00:08:18.605 15:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.605 15:33:39 -- accel/accel.sh@21 -- # val=0x1 00:08:18.605 15:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.605 15:33:39 -- accel/accel.sh@21 -- # val= 00:08:18.605 15:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.605 15:33:39 -- accel/accel.sh@21 -- # val= 00:08:18.605 15:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.605 15:33:39 -- accel/accel.sh@21 -- # val=xor 00:08:18.605 15:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.605 15:33:39 -- accel/accel.sh@24 -- # accel_opc=xor 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.605 15:33:39 -- accel/accel.sh@21 -- # val=3 00:08:18.605 15:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.605 15:33:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:18.605 15:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.605 15:33:39 -- accel/accel.sh@21 -- # val= 00:08:18.605 15:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.605 15:33:39 -- accel/accel.sh@21 -- # val=software 00:08:18.605 15:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.605 15:33:39 -- accel/accel.sh@23 -- # accel_module=software 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.605 15:33:39 -- accel/accel.sh@21 -- # val=32 00:08:18.605 15:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.605 15:33:39 -- accel/accel.sh@21 -- # val=32 00:08:18.605 15:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.605 15:33:39 -- accel/accel.sh@21 -- # val=1 00:08:18.605 15:33:39 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.605 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.606 15:33:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:18.606 15:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.606 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.606 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.606 15:33:39 -- accel/accel.sh@21 -- # val=Yes 00:08:18.606 15:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.606 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.606 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.606 15:33:39 -- accel/accel.sh@21 -- # val= 00:08:18.606 15:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.606 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.606 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:18.606 15:33:39 -- accel/accel.sh@21 -- # val= 00:08:18.606 15:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.606 15:33:39 -- accel/accel.sh@20 -- # IFS=: 00:08:18.606 15:33:39 -- accel/accel.sh@20 -- # read -r var val 00:08:20.521 15:33:41 -- accel/accel.sh@21 -- # val= 00:08:20.521 15:33:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.521 15:33:41 -- accel/accel.sh@20 -- # IFS=: 00:08:20.521 15:33:41 -- accel/accel.sh@20 -- # read -r var val 00:08:20.521 15:33:41 -- accel/accel.sh@21 -- # val= 00:08:20.521 15:33:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.521 15:33:41 -- accel/accel.sh@20 -- # IFS=: 00:08:20.521 15:33:41 -- accel/accel.sh@20 -- # read -r var val 00:08:20.521 15:33:41 -- accel/accel.sh@21 -- # val= 00:08:20.521 15:33:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.521 15:33:41 -- accel/accel.sh@20 -- # IFS=: 00:08:20.521 15:33:41 -- accel/accel.sh@20 -- # read -r var val 00:08:20.521 15:33:41 -- accel/accel.sh@21 -- # val= 00:08:20.521 15:33:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.521 15:33:41 -- accel/accel.sh@20 -- # IFS=: 00:08:20.521 15:33:41 -- accel/accel.sh@20 -- # read -r var val 00:08:20.521 15:33:41 -- accel/accel.sh@21 -- # val= 00:08:20.521 15:33:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.521 15:33:41 -- accel/accel.sh@20 -- # IFS=: 00:08:20.521 15:33:41 -- accel/accel.sh@20 -- # read -r var val 00:08:20.521 15:33:41 -- accel/accel.sh@21 -- # val= 00:08:20.521 15:33:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.521 15:33:41 -- accel/accel.sh@20 -- # IFS=: 00:08:20.521 15:33:41 -- accel/accel.sh@20 -- # read -r var val 00:08:20.521 ************************************ 00:08:20.521 END TEST accel_xor 00:08:20.521 ************************************ 00:08:20.521 15:33:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:20.521 15:33:41 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:08:20.521 15:33:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.521 00:08:20.521 real 0m4.808s 00:08:20.521 user 0m4.306s 00:08:20.521 sys 0m0.294s 00:08:20.521 15:33:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.521 15:33:41 -- common/autotest_common.sh@10 -- # set +x 00:08:20.521 15:33:41 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:20.521 15:33:41 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:20.521 15:33:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.521 15:33:41 -- common/autotest_common.sh@10 -- # set +x 00:08:20.521 ************************************ 00:08:20.521 START TEST accel_dif_verify 00:08:20.521 ************************************ 
00:08:20.521 15:33:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:08:20.521 15:33:41 -- accel/accel.sh@16 -- # local accel_opc 00:08:20.521 15:33:41 -- accel/accel.sh@17 -- # local accel_module 00:08:20.521 15:33:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:08:20.521 15:33:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:20.521 15:33:41 -- accel/accel.sh@12 -- # build_accel_config 00:08:20.521 15:33:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:20.521 15:33:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.521 15:33:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.521 15:33:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:20.521 15:33:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:20.521 15:33:41 -- accel/accel.sh@41 -- # local IFS=, 00:08:20.521 15:33:41 -- accel/accel.sh@42 -- # jq -r . 00:08:20.521 [2024-07-24 15:33:41.885876] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:20.521 [2024-07-24 15:33:41.886025] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59767 ] 00:08:20.521 [2024-07-24 15:33:42.055508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.779 [2024-07-24 15:33:42.236954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.677 15:33:44 -- accel/accel.sh@18 -- # out=' 00:08:22.677 SPDK Configuration: 00:08:22.677 Core mask: 0x1 00:08:22.677 00:08:22.677 Accel Perf Configuration: 00:08:22.677 Workload Type: dif_verify 00:08:22.677 Vector size: 4096 bytes 00:08:22.677 Transfer size: 4096 bytes 00:08:22.677 Block size: 512 bytes 00:08:22.677 Metadata size: 8 bytes 00:08:22.677 Vector count 1 00:08:22.677 Module: software 00:08:22.677 Queue depth: 32 00:08:22.677 Allocate depth: 32 00:08:22.677 # threads/core: 1 00:08:22.677 Run time: 1 seconds 00:08:22.677 Verify: No 00:08:22.677 00:08:22.677 Running for 1 seconds... 00:08:22.677 00:08:22.677 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:22.677 ------------------------------------------------------------------------------------ 00:08:22.677 0,0 87584/s 347 MiB/s 0 0 00:08:22.677 ==================================================================================== 00:08:22.677 Total 87584/s 342 MiB/s 0 0' 00:08:22.677 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:22.677 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:22.677 15:33:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:22.677 15:33:44 -- accel/accel.sh@12 -- # build_accel_config 00:08:22.677 15:33:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:22.677 15:33:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:22.677 15:33:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.677 15:33:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.677 15:33:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:22.677 15:33:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:22.677 15:33:44 -- accel/accel.sh@41 -- # local IFS=, 00:08:22.677 15:33:44 -- accel/accel.sh@42 -- # jq -r . 00:08:22.935 [2024-07-24 15:33:44.303657] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
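The dif_verify configuration above pairs a 4096-byte transfer with a 512-byte block size and 8 bytes of metadata, i.e. 4096 / 512 = 8 protection-information tuples checked per transfer (and 87584 transfers/s × 4096 bytes ≈ 342 MiB/s, per the Total row). Assuming the conventional T10 DIF layout for the 8-byte tuple, which the log itself does not spell out:

    bytes 0-1   guard tag       (CRC-16 over the 512-byte block)
    bytes 2-3   application tag
    bytes 4-7   reference tag   (typically the low 32 bits of the LBA)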
00:08:22.935 [2024-07-24 15:33:44.304324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59798 ] 00:08:22.935 [2024-07-24 15:33:44.469817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.193 [2024-07-24 15:33:44.650435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.452 15:33:44 -- accel/accel.sh@21 -- # val= 00:08:23.452 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.452 15:33:44 -- accel/accel.sh@21 -- # val= 00:08:23.452 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.452 15:33:44 -- accel/accel.sh@21 -- # val=0x1 00:08:23.452 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.452 15:33:44 -- accel/accel.sh@21 -- # val= 00:08:23.452 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.452 15:33:44 -- accel/accel.sh@21 -- # val= 00:08:23.452 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.452 15:33:44 -- accel/accel.sh@21 -- # val=dif_verify 00:08:23.452 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.452 15:33:44 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.452 15:33:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:23.452 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.452 15:33:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:23.452 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.452 15:33:44 -- accel/accel.sh@21 -- # val='512 bytes' 00:08:23.452 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.452 15:33:44 -- accel/accel.sh@21 -- # val='8 bytes' 00:08:23.452 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.452 15:33:44 -- accel/accel.sh@21 -- # val= 00:08:23.452 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.452 15:33:44 -- accel/accel.sh@21 -- # val=software 00:08:23.452 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.452 15:33:44 -- accel/accel.sh@23 -- # accel_module=software 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.452 15:33:44 -- accel/accel.sh@21 
-- # val=32 00:08:23.452 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.452 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.453 15:33:44 -- accel/accel.sh@21 -- # val=32 00:08:23.453 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.453 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.453 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.453 15:33:44 -- accel/accel.sh@21 -- # val=1 00:08:23.453 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.453 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.453 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.453 15:33:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:23.453 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.453 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.453 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.453 15:33:44 -- accel/accel.sh@21 -- # val=No 00:08:23.453 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.453 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.453 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.453 15:33:44 -- accel/accel.sh@21 -- # val= 00:08:23.453 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.453 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.453 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:23.453 15:33:44 -- accel/accel.sh@21 -- # val= 00:08:23.453 15:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.453 15:33:44 -- accel/accel.sh@20 -- # IFS=: 00:08:23.453 15:33:44 -- accel/accel.sh@20 -- # read -r var val 00:08:25.356 15:33:46 -- accel/accel.sh@21 -- # val= 00:08:25.356 15:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.356 15:33:46 -- accel/accel.sh@20 -- # IFS=: 00:08:25.356 15:33:46 -- accel/accel.sh@20 -- # read -r var val 00:08:25.356 15:33:46 -- accel/accel.sh@21 -- # val= 00:08:25.356 15:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.356 15:33:46 -- accel/accel.sh@20 -- # IFS=: 00:08:25.356 15:33:46 -- accel/accel.sh@20 -- # read -r var val 00:08:25.356 15:33:46 -- accel/accel.sh@21 -- # val= 00:08:25.356 15:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.356 15:33:46 -- accel/accel.sh@20 -- # IFS=: 00:08:25.356 15:33:46 -- accel/accel.sh@20 -- # read -r var val 00:08:25.356 15:33:46 -- accel/accel.sh@21 -- # val= 00:08:25.356 15:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.356 15:33:46 -- accel/accel.sh@20 -- # IFS=: 00:08:25.356 15:33:46 -- accel/accel.sh@20 -- # read -r var val 00:08:25.356 15:33:46 -- accel/accel.sh@21 -- # val= 00:08:25.356 15:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.356 15:33:46 -- accel/accel.sh@20 -- # IFS=: 00:08:25.356 15:33:46 -- accel/accel.sh@20 -- # read -r var val 00:08:25.356 15:33:46 -- accel/accel.sh@21 -- # val= 00:08:25.356 15:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.356 15:33:46 -- accel/accel.sh@20 -- # IFS=: 00:08:25.356 15:33:46 -- accel/accel.sh@20 -- # read -r var val 00:08:25.356 15:33:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:25.356 ************************************ 00:08:25.356 END TEST accel_dif_verify 00:08:25.356 ************************************ 00:08:25.356 15:33:46 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:08:25.356 15:33:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.356 00:08:25.356 real 0m4.837s 00:08:25.356 user 0m4.335s 00:08:25.356 sys 0m0.290s 00:08:25.357 15:33:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.357 
15:33:46 -- common/autotest_common.sh@10 -- # set +x 00:08:25.357 15:33:46 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:25.357 15:33:46 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:25.357 15:33:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.357 15:33:46 -- common/autotest_common.sh@10 -- # set +x 00:08:25.357 ************************************ 00:08:25.357 START TEST accel_dif_generate 00:08:25.357 ************************************ 00:08:25.357 15:33:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:08:25.357 15:33:46 -- accel/accel.sh@16 -- # local accel_opc 00:08:25.357 15:33:46 -- accel/accel.sh@17 -- # local accel_module 00:08:25.357 15:33:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:08:25.357 15:33:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:25.357 15:33:46 -- accel/accel.sh@12 -- # build_accel_config 00:08:25.357 15:33:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:25.357 15:33:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.357 15:33:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.357 15:33:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:25.357 15:33:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:25.357 15:33:46 -- accel/accel.sh@41 -- # local IFS=, 00:08:25.357 15:33:46 -- accel/accel.sh@42 -- # jq -r . 00:08:25.357 [2024-07-24 15:33:46.773378] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:25.357 [2024-07-24 15:33:46.773562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59845 ] 00:08:25.615 [2024-07-24 15:33:46.954616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.615 [2024-07-24 15:33:47.137678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.143 15:33:49 -- accel/accel.sh@18 -- # out=' 00:08:28.143 SPDK Configuration: 00:08:28.143 Core mask: 0x1 00:08:28.143 00:08:28.143 Accel Perf Configuration: 00:08:28.143 Workload Type: dif_generate 00:08:28.143 Vector size: 4096 bytes 00:08:28.143 Transfer size: 4096 bytes 00:08:28.143 Block size: 512 bytes 00:08:28.143 Metadata size: 8 bytes 00:08:28.143 Vector count 1 00:08:28.143 Module: software 00:08:28.143 Queue depth: 32 00:08:28.143 Allocate depth: 32 00:08:28.143 # threads/core: 1 00:08:28.143 Run time: 1 seconds 00:08:28.143 Verify: No 00:08:28.143 00:08:28.143 Running for 1 seconds... 
00:08:28.143 00:08:28.143 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:28.143 ------------------------------------------------------------------------------------ 00:08:28.143 0,0 106912/s 424 MiB/s 0 0 00:08:28.143 ==================================================================================== 00:08:28.143 Total 106912/s 417 MiB/s 0 0' 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.143 15:33:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.143 15:33:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:28.143 15:33:49 -- accel/accel.sh@12 -- # build_accel_config 00:08:28.143 15:33:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:28.143 15:33:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.143 15:33:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.143 15:33:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:28.143 15:33:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:28.143 15:33:49 -- accel/accel.sh@41 -- # local IFS=, 00:08:28.143 15:33:49 -- accel/accel.sh@42 -- # jq -r . 00:08:28.143 [2024-07-24 15:33:49.200432] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:28.143 [2024-07-24 15:33:49.201182] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59872 ] 00:08:28.143 [2024-07-24 15:33:49.366176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.143 [2024-07-24 15:33:49.548141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.143 15:33:49 -- accel/accel.sh@21 -- # val= 00:08:28.143 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.143 15:33:49 -- accel/accel.sh@21 -- # val= 00:08:28.143 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.143 15:33:49 -- accel/accel.sh@21 -- # val=0x1 00:08:28.143 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.143 15:33:49 -- accel/accel.sh@21 -- # val= 00:08:28.143 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.143 15:33:49 -- accel/accel.sh@21 -- # val= 00:08:28.143 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.143 15:33:49 -- accel/accel.sh@21 -- # val=dif_generate 00:08:28.143 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.143 15:33:49 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.143 15:33:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:28.143 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # read -r var val 
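Applying the same bandwidth check to dif_generate: 106912 transfers/s × 4096 bytes ≈ 417 MiB/s, matching the Total line above. Generation comes out faster than verification (106912/s vs the 87584/s seen for dif_verify), plausibly because generate only computes and stores each tuple while verify must recompute and compare against the stored ones.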
00:08:28.143 15:33:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:28.143 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.143 15:33:49 -- accel/accel.sh@21 -- # val='512 bytes' 00:08:28.143 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.143 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.143 15:33:49 -- accel/accel.sh@21 -- # val='8 bytes' 00:08:28.144 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.144 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.144 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.144 15:33:49 -- accel/accel.sh@21 -- # val= 00:08:28.144 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.144 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.144 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.144 15:33:49 -- accel/accel.sh@21 -- # val=software 00:08:28.144 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.144 15:33:49 -- accel/accel.sh@23 -- # accel_module=software 00:08:28.144 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.144 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.144 15:33:49 -- accel/accel.sh@21 -- # val=32 00:08:28.402 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.402 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.402 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.402 15:33:49 -- accel/accel.sh@21 -- # val=32 00:08:28.402 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.402 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.402 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.402 15:33:49 -- accel/accel.sh@21 -- # val=1 00:08:28.402 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.402 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.402 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.402 15:33:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:28.402 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.402 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.402 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.402 15:33:49 -- accel/accel.sh@21 -- # val=No 00:08:28.402 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.402 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.402 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.402 15:33:49 -- accel/accel.sh@21 -- # val= 00:08:28.402 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.402 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.402 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:28.402 15:33:49 -- accel/accel.sh@21 -- # val= 00:08:28.402 15:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.402 15:33:49 -- accel/accel.sh@20 -- # IFS=: 00:08:28.402 15:33:49 -- accel/accel.sh@20 -- # read -r var val 00:08:30.304 15:33:51 -- accel/accel.sh@21 -- # val= 00:08:30.304 15:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.304 15:33:51 -- accel/accel.sh@20 -- # IFS=: 00:08:30.304 15:33:51 -- accel/accel.sh@20 -- # read -r var val 00:08:30.304 15:33:51 -- accel/accel.sh@21 -- # val= 00:08:30.304 15:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.304 15:33:51 -- accel/accel.sh@20 -- # IFS=: 00:08:30.304 15:33:51 -- accel/accel.sh@20 -- # read -r var val 00:08:30.304 15:33:51 -- accel/accel.sh@21 -- # val= 00:08:30.304 15:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.304 15:33:51 -- 
accel/accel.sh@20 -- # IFS=: 00:08:30.304 15:33:51 -- accel/accel.sh@20 -- # read -r var val 00:08:30.304 15:33:51 -- accel/accel.sh@21 -- # val= 00:08:30.304 15:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.304 15:33:51 -- accel/accel.sh@20 -- # IFS=: 00:08:30.304 15:33:51 -- accel/accel.sh@20 -- # read -r var val 00:08:30.304 15:33:51 -- accel/accel.sh@21 -- # val= 00:08:30.304 15:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.304 15:33:51 -- accel/accel.sh@20 -- # IFS=: 00:08:30.304 15:33:51 -- accel/accel.sh@20 -- # read -r var val 00:08:30.304 15:33:51 -- accel/accel.sh@21 -- # val= 00:08:30.304 15:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.304 15:33:51 -- accel/accel.sh@20 -- # IFS=: 00:08:30.304 15:33:51 -- accel/accel.sh@20 -- # read -r var val 00:08:30.304 15:33:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:30.304 15:33:51 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:08:30.304 15:33:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.304 ************************************ 00:08:30.304 END TEST accel_dif_generate 00:08:30.304 ************************************ 00:08:30.304 00:08:30.304 real 0m4.832s 00:08:30.304 user 0m4.341s 00:08:30.304 sys 0m0.277s 00:08:30.304 15:33:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.304 15:33:51 -- common/autotest_common.sh@10 -- # set +x 00:08:30.304 15:33:51 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:30.304 15:33:51 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:30.304 15:33:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:30.304 15:33:51 -- common/autotest_common.sh@10 -- # set +x 00:08:30.304 ************************************ 00:08:30.304 START TEST accel_dif_generate_copy 00:08:30.304 ************************************ 00:08:30.304 15:33:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:08:30.304 15:33:51 -- accel/accel.sh@16 -- # local accel_opc 00:08:30.304 15:33:51 -- accel/accel.sh@17 -- # local accel_module 00:08:30.304 15:33:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:08:30.304 15:33:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:30.304 15:33:51 -- accel/accel.sh@12 -- # build_accel_config 00:08:30.304 15:33:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:30.304 15:33:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.304 15:33:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.304 15:33:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:30.304 15:33:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:30.304 15:33:51 -- accel/accel.sh@41 -- # local IFS=, 00:08:30.304 15:33:51 -- accel/accel.sh@42 -- # jq -r . 00:08:30.304 [2024-07-24 15:33:51.634558] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:30.304 [2024-07-24 15:33:51.634689] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59924 ] 00:08:30.304 [2024-07-24 15:33:51.794513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.562 [2024-07-24 15:33:51.977240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.459 15:33:53 -- accel/accel.sh@18 -- # out=' 00:08:32.459 SPDK Configuration: 00:08:32.459 Core mask: 0x1 00:08:32.459 00:08:32.459 Accel Perf Configuration: 00:08:32.460 Workload Type: dif_generate_copy 00:08:32.460 Vector size: 4096 bytes 00:08:32.460 Transfer size: 4096 bytes 00:08:32.460 Vector count 1 00:08:32.460 Module: software 00:08:32.460 Queue depth: 32 00:08:32.460 Allocate depth: 32 00:08:32.460 # threads/core: 1 00:08:32.460 Run time: 1 seconds 00:08:32.460 Verify: No 00:08:32.460 00:08:32.460 Running for 1 seconds... 00:08:32.460 00:08:32.460 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:32.460 ------------------------------------------------------------------------------------ 00:08:32.460 0,0 76096/s 301 MiB/s 0 0 00:08:32.460 ==================================================================================== 00:08:32.460 Total 76096/s 297 MiB/s 0 0' 00:08:32.460 15:33:53 -- accel/accel.sh@20 -- # IFS=: 00:08:32.460 15:33:53 -- accel/accel.sh@20 -- # read -r var val 00:08:32.460 15:33:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:32.460 15:33:53 -- accel/accel.sh@12 -- # build_accel_config 00:08:32.460 15:33:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:32.460 15:33:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:32.460 15:33:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:32.460 15:33:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:32.460 15:33:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:32.460 15:33:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:32.460 15:33:53 -- accel/accel.sh@41 -- # local IFS=, 00:08:32.460 15:33:53 -- accel/accel.sh@42 -- # jq -r . 00:08:32.460 [2024-07-24 15:33:54.020267] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
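dif_generate_copy is slower again, at 76096 transfers/s × 4096 bytes ≈ 297 MiB/s per the Total row above, which is consistent with each operation copying the 4 KiB payload on top of generating the protection information.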
00:08:32.460 [2024-07-24 15:33:54.020442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59950 ] 00:08:32.717 [2024-07-24 15:33:54.189727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.005 [2024-07-24 15:33:54.370640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.005 15:33:54 -- accel/accel.sh@21 -- # val= 00:08:33.005 15:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # IFS=: 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # read -r var val 00:08:33.005 15:33:54 -- accel/accel.sh@21 -- # val= 00:08:33.005 15:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # IFS=: 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # read -r var val 00:08:33.005 15:33:54 -- accel/accel.sh@21 -- # val=0x1 00:08:33.005 15:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # IFS=: 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # read -r var val 00:08:33.005 15:33:54 -- accel/accel.sh@21 -- # val= 00:08:33.005 15:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # IFS=: 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # read -r var val 00:08:33.005 15:33:54 -- accel/accel.sh@21 -- # val= 00:08:33.005 15:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # IFS=: 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # read -r var val 00:08:33.005 15:33:54 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:08:33.005 15:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.005 15:33:54 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # IFS=: 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # read -r var val 00:08:33.005 15:33:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:33.005 15:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # IFS=: 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # read -r var val 00:08:33.005 15:33:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:33.005 15:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # IFS=: 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # read -r var val 00:08:33.005 15:33:54 -- accel/accel.sh@21 -- # val= 00:08:33.005 15:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # IFS=: 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # read -r var val 00:08:33.005 15:33:54 -- accel/accel.sh@21 -- # val=software 00:08:33.005 15:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.005 15:33:54 -- accel/accel.sh@23 -- # accel_module=software 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # IFS=: 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # read -r var val 00:08:33.005 15:33:54 -- accel/accel.sh@21 -- # val=32 00:08:33.005 15:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # IFS=: 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # read -r var val 00:08:33.005 15:33:54 -- accel/accel.sh@21 -- # val=32 00:08:33.005 15:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # IFS=: 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # read -r var val 00:08:33.005 15:33:54 -- accel/accel.sh@21 
-- # val=1 00:08:33.005 15:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # IFS=: 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # read -r var val 00:08:33.005 15:33:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:33.005 15:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # IFS=: 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # read -r var val 00:08:33.005 15:33:54 -- accel/accel.sh@21 -- # val=No 00:08:33.005 15:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # IFS=: 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # read -r var val 00:08:33.005 15:33:54 -- accel/accel.sh@21 -- # val= 00:08:33.005 15:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # IFS=: 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # read -r var val 00:08:33.005 15:33:54 -- accel/accel.sh@21 -- # val= 00:08:33.005 15:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # IFS=: 00:08:33.005 15:33:54 -- accel/accel.sh@20 -- # read -r var val 00:08:34.903 15:33:56 -- accel/accel.sh@21 -- # val= 00:08:34.903 15:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.903 15:33:56 -- accel/accel.sh@20 -- # IFS=: 00:08:34.903 15:33:56 -- accel/accel.sh@20 -- # read -r var val 00:08:34.903 15:33:56 -- accel/accel.sh@21 -- # val= 00:08:34.903 15:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.903 15:33:56 -- accel/accel.sh@20 -- # IFS=: 00:08:34.903 15:33:56 -- accel/accel.sh@20 -- # read -r var val 00:08:34.903 15:33:56 -- accel/accel.sh@21 -- # val= 00:08:34.903 15:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.903 15:33:56 -- accel/accel.sh@20 -- # IFS=: 00:08:34.903 15:33:56 -- accel/accel.sh@20 -- # read -r var val 00:08:34.903 15:33:56 -- accel/accel.sh@21 -- # val= 00:08:34.903 15:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.903 15:33:56 -- accel/accel.sh@20 -- # IFS=: 00:08:34.903 15:33:56 -- accel/accel.sh@20 -- # read -r var val 00:08:34.903 15:33:56 -- accel/accel.sh@21 -- # val= 00:08:34.903 15:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.903 15:33:56 -- accel/accel.sh@20 -- # IFS=: 00:08:34.903 15:33:56 -- accel/accel.sh@20 -- # read -r var val 00:08:34.903 15:33:56 -- accel/accel.sh@21 -- # val= 00:08:34.903 15:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.903 15:33:56 -- accel/accel.sh@20 -- # IFS=: 00:08:34.904 15:33:56 -- accel/accel.sh@20 -- # read -r var val 00:08:34.904 ************************************ 00:08:34.904 END TEST accel_dif_generate_copy 00:08:34.904 ************************************ 00:08:34.904 15:33:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:34.904 15:33:56 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:08:34.904 15:33:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:34.904 00:08:34.904 real 0m4.776s 00:08:34.904 user 0m4.270s 00:08:34.904 sys 0m0.290s 00:08:34.904 15:33:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.904 15:33:56 -- common/autotest_common.sh@10 -- # set +x 00:08:34.904 15:33:56 -- accel/accel.sh@107 -- # [[ y == y ]] 00:08:34.904 15:33:56 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:34.904 15:33:56 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:08:34.904 15:33:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.904 15:33:56 -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.904 ************************************ 00:08:34.904 START TEST accel_comp 00:08:34.904 ************************************ 00:08:34.904 15:33:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:34.904 15:33:56 -- accel/accel.sh@16 -- # local accel_opc 00:08:34.904 15:33:56 -- accel/accel.sh@17 -- # local accel_module 00:08:34.904 15:33:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:34.904 15:33:56 -- accel/accel.sh@12 -- # build_accel_config 00:08:34.904 15:33:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:34.904 15:33:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:34.904 15:33:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.904 15:33:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.904 15:33:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:34.904 15:33:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:34.904 15:33:56 -- accel/accel.sh@41 -- # local IFS=, 00:08:34.904 15:33:56 -- accel/accel.sh@42 -- # jq -r . 00:08:34.904 [2024-07-24 15:33:56.468736] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:34.904 [2024-07-24 15:33:56.468870] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59991 ] 00:08:35.162 [2024-07-24 15:33:56.630864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.421 [2024-07-24 15:33:56.815507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.323 15:33:58 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:37.323 00:08:37.323 SPDK Configuration: 00:08:37.323 Core mask: 0x1 00:08:37.323 00:08:37.323 Accel Perf Configuration: 00:08:37.323 Workload Type: compress 00:08:37.323 Transfer size: 4096 bytes 00:08:37.323 Vector count 1 00:08:37.323 Module: software 00:08:37.323 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:37.323 Queue depth: 32 00:08:37.323 Allocate depth: 32 00:08:37.323 # threads/core: 1 00:08:37.323 Run time: 1 seconds 00:08:37.323 Verify: No 00:08:37.323 00:08:37.323 Running for 1 seconds... 
00:08:37.323 00:08:37.323 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:37.323 ------------------------------------------------------------------------------------ 00:08:37.323 0,0 45568/s 190 MiB/s 0 0 00:08:37.323 ==================================================================================== 00:08:37.323 Total 45568/s 178 MiB/s 0 0' 00:08:37.323 15:33:58 -- accel/accel.sh@20 -- # IFS=: 00:08:37.323 15:33:58 -- accel/accel.sh@20 -- # read -r var val 00:08:37.323 15:33:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:37.323 15:33:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:37.323 15:33:58 -- accel/accel.sh@12 -- # build_accel_config 00:08:37.323 15:33:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:37.323 15:33:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:37.323 15:33:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:37.323 15:33:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:37.323 15:33:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:37.323 15:33:58 -- accel/accel.sh@41 -- # local IFS=, 00:08:37.323 15:33:58 -- accel/accel.sh@42 -- # jq -r . 00:08:37.323 [2024-07-24 15:33:58.838489] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:37.323 [2024-07-24 15:33:58.838619] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60023 ] 00:08:37.582 [2024-07-24 15:33:58.998857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.839 [2024-07-24 15:33:59.180221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.839 15:33:59 -- accel/accel.sh@21 -- # val= 00:08:37.839 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.839 15:33:59 -- accel/accel.sh@20 -- # IFS=: 00:08:37.839 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:37.839 15:33:59 -- accel/accel.sh@21 -- # val= 00:08:37.839 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.839 15:33:59 -- accel/accel.sh@20 -- # IFS=: 00:08:37.839 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:37.839 15:33:59 -- accel/accel.sh@21 -- # val= 00:08:37.839 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.839 15:33:59 -- accel/accel.sh@20 -- # IFS=: 00:08:37.839 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:37.839 15:33:59 -- accel/accel.sh@21 -- # val=0x1 00:08:37.839 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.839 15:33:59 -- accel/accel.sh@20 -- # IFS=: 00:08:37.839 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:37.839 15:33:59 -- accel/accel.sh@21 -- # val= 00:08:37.839 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.839 15:33:59 -- accel/accel.sh@20 -- # IFS=: 00:08:37.839 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:37.839 15:33:59 -- accel/accel.sh@21 -- # val= 00:08:37.839 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.839 15:33:59 -- accel/accel.sh@20 -- # IFS=: 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:37.840 15:33:59 -- accel/accel.sh@21 -- # val=compress 00:08:37.840 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.840 15:33:59 -- accel/accel.sh@24 -- # accel_opc=compress 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # IFS=: 
00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:37.840 15:33:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:37.840 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # IFS=: 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:37.840 15:33:59 -- accel/accel.sh@21 -- # val= 00:08:37.840 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # IFS=: 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:37.840 15:33:59 -- accel/accel.sh@21 -- # val=software 00:08:37.840 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.840 15:33:59 -- accel/accel.sh@23 -- # accel_module=software 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # IFS=: 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:37.840 15:33:59 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:37.840 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # IFS=: 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:37.840 15:33:59 -- accel/accel.sh@21 -- # val=32 00:08:37.840 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # IFS=: 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:37.840 15:33:59 -- accel/accel.sh@21 -- # val=32 00:08:37.840 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # IFS=: 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:37.840 15:33:59 -- accel/accel.sh@21 -- # val=1 00:08:37.840 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # IFS=: 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:37.840 15:33:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:37.840 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # IFS=: 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:37.840 15:33:59 -- accel/accel.sh@21 -- # val=No 00:08:37.840 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # IFS=: 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:37.840 15:33:59 -- accel/accel.sh@21 -- # val= 00:08:37.840 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # IFS=: 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:37.840 15:33:59 -- accel/accel.sh@21 -- # val= 00:08:37.840 15:33:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # IFS=: 00:08:37.840 15:33:59 -- accel/accel.sh@20 -- # read -r var val 00:08:39.765 15:34:01 -- accel/accel.sh@21 -- # val= 00:08:39.765 15:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.765 15:34:01 -- accel/accel.sh@20 -- # IFS=: 00:08:39.765 15:34:01 -- accel/accel.sh@20 -- # read -r var val 00:08:39.765 15:34:01 -- accel/accel.sh@21 -- # val= 00:08:39.765 15:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.765 15:34:01 -- accel/accel.sh@20 -- # IFS=: 00:08:39.765 15:34:01 -- accel/accel.sh@20 -- # read -r var val 00:08:39.765 15:34:01 -- accel/accel.sh@21 -- # val= 00:08:39.765 15:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.765 15:34:01 -- accel/accel.sh@20 -- # IFS=: 00:08:39.765 15:34:01 -- accel/accel.sh@20 -- # read -r var val 00:08:39.765 15:34:01 -- accel/accel.sh@21 -- # val= 
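The val trace above is the harness stepping through accel_perf's configuration for the compress case: the software module, the bib test file as input, 4096-byte transfers, queue and allocate depth of 32, one thread per core, a one-second run, and no verification. As a rough sketch, stripped of the harness plumbing (the -c /dev/fd/62 JSON config), the equivalent direct invocation would be:

    # One-second software-module compress benchmark on the bib test file
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib

For reference, the Total row in the results table above follows from transfers multiplied by transfer size: 45568 transfers/s x 4096 bytes = 186,646,528 bytes/s, i.e. the reported 178 MiB/s.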
00:08:39.765 15:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.765 15:34:01 -- accel/accel.sh@20 -- # IFS=: 00:08:39.765 15:34:01 -- accel/accel.sh@20 -- # read -r var val 00:08:39.765 15:34:01 -- accel/accel.sh@21 -- # val= 00:08:39.765 15:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.765 15:34:01 -- accel/accel.sh@20 -- # IFS=: 00:08:39.765 15:34:01 -- accel/accel.sh@20 -- # read -r var val 00:08:39.765 15:34:01 -- accel/accel.sh@21 -- # val= 00:08:39.765 15:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.765 15:34:01 -- accel/accel.sh@20 -- # IFS=: 00:08:39.765 15:34:01 -- accel/accel.sh@20 -- # read -r var val 00:08:39.765 15:34:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:39.765 15:34:01 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:08:39.765 15:34:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:39.765 00:08:39.765 real 0m4.743s 00:08:39.765 user 0m4.255s 00:08:39.765 sys 0m0.274s 00:08:39.765 15:34:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.765 15:34:01 -- common/autotest_common.sh@10 -- # set +x 00:08:39.765 ************************************ 00:08:39.765 END TEST accel_comp 00:08:39.765 ************************************ 00:08:39.765 15:34:01 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:39.765 15:34:01 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:39.765 15:34:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:39.765 15:34:01 -- common/autotest_common.sh@10 -- # set +x 00:08:39.765 ************************************ 00:08:39.765 START TEST accel_decomp 00:08:39.765 ************************************ 00:08:39.765 15:34:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:39.765 15:34:01 -- accel/accel.sh@16 -- # local accel_opc 00:08:39.765 15:34:01 -- accel/accel.sh@17 -- # local accel_module 00:08:39.765 15:34:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:39.765 15:34:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:39.765 15:34:01 -- accel/accel.sh@12 -- # build_accel_config 00:08:39.765 15:34:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:39.765 15:34:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:39.765 15:34:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:39.765 15:34:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:39.765 15:34:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:39.765 15:34:01 -- accel/accel.sh@41 -- # local IFS=, 00:08:39.765 15:34:01 -- accel/accel.sh@42 -- # jq -r . 00:08:39.765 [2024-07-24 15:34:01.259672] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:39.765 [2024-07-24 15:34:01.259815] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60069 ] 00:08:40.024 [2024-07-24 15:34:01.427931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.024 [2024-07-24 15:34:01.604412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.553 15:34:03 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:42.553 00:08:42.553 SPDK Configuration: 00:08:42.553 Core mask: 0x1 00:08:42.553 00:08:42.553 Accel Perf Configuration: 00:08:42.553 Workload Type: decompress 00:08:42.553 Transfer size: 4096 bytes 00:08:42.553 Vector count 1 00:08:42.553 Module: software 00:08:42.553 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:42.553 Queue depth: 32 00:08:42.553 Allocate depth: 32 00:08:42.553 # threads/core: 1 00:08:42.553 Run time: 1 seconds 00:08:42.553 Verify: Yes 00:08:42.553 00:08:42.553 Running for 1 seconds... 00:08:42.553 00:08:42.553 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:42.553 ------------------------------------------------------------------------------------ 00:08:42.553 0,0 58624/s 108 MiB/s 0 0 00:08:42.553 ==================================================================================== 00:08:42.553 Total 58624/s 229 MiB/s 0 0' 00:08:42.553 15:34:03 -- accel/accel.sh@20 -- # IFS=: 00:08:42.553 15:34:03 -- accel/accel.sh@20 -- # read -r var val 00:08:42.553 15:34:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:42.553 15:34:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:42.553 15:34:03 -- accel/accel.sh@12 -- # build_accel_config 00:08:42.553 15:34:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:42.553 15:34:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:42.553 15:34:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:42.554 15:34:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:42.554 15:34:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:42.554 15:34:03 -- accel/accel.sh@41 -- # local IFS=, 00:08:42.554 15:34:03 -- accel/accel.sh@42 -- # jq -r . 00:08:42.554 [2024-07-24 15:34:03.656071] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
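Relative to the compress run, this decompress invocation adds -y, which enables verification of the output (hence Verify: Yes in the configuration above). The Total row again checks out as transfers times transfer size:

    58624 transfers/s x 4096 bytes = 240,123,904 bytes/s, roughly 229 MiB/s

The lower per-core bandwidth figure appears to be computed on a different basis, plausibly the compressed input bytes rather than the 4096-byte decompressed output, so it is not expected to match the Total row.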
00:08:42.554 [2024-07-24 15:34:03.656237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60095 ] 00:08:42.554 [2024-07-24 15:34:03.824469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.554 [2024-07-24 15:34:04.006990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.812 15:34:04 -- accel/accel.sh@21 -- # val= 00:08:42.812 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:42.812 15:34:04 -- accel/accel.sh@21 -- # val= 00:08:42.812 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:42.812 15:34:04 -- accel/accel.sh@21 -- # val= 00:08:42.812 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:42.812 15:34:04 -- accel/accel.sh@21 -- # val=0x1 00:08:42.812 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:42.812 15:34:04 -- accel/accel.sh@21 -- # val= 00:08:42.812 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:42.812 15:34:04 -- accel/accel.sh@21 -- # val= 00:08:42.812 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:42.812 15:34:04 -- accel/accel.sh@21 -- # val=decompress 00:08:42.812 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.812 15:34:04 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:42.812 15:34:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:42.812 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:42.812 15:34:04 -- accel/accel.sh@21 -- # val= 00:08:42.812 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:42.812 15:34:04 -- accel/accel.sh@21 -- # val=software 00:08:42.812 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.812 15:34:04 -- accel/accel.sh@23 -- # accel_module=software 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:42.812 15:34:04 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:42.812 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:42.812 15:34:04 -- accel/accel.sh@21 -- # val=32 00:08:42.812 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:42.812 15:34:04 -- 
accel/accel.sh@21 -- # val=32 00:08:42.812 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:42.812 15:34:04 -- accel/accel.sh@21 -- # val=1 00:08:42.812 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:42.812 15:34:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:42.812 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:42.812 15:34:04 -- accel/accel.sh@21 -- # val=Yes 00:08:42.812 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:42.812 15:34:04 -- accel/accel.sh@21 -- # val= 00:08:42.812 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.812 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:42.812 15:34:04 -- accel/accel.sh@21 -- # val= 00:08:42.813 15:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.813 15:34:04 -- accel/accel.sh@20 -- # IFS=: 00:08:42.813 15:34:04 -- accel/accel.sh@20 -- # read -r var val 00:08:44.711 15:34:05 -- accel/accel.sh@21 -- # val= 00:08:44.711 15:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.711 15:34:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.711 15:34:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.711 15:34:05 -- accel/accel.sh@21 -- # val= 00:08:44.711 15:34:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.711 15:34:05 -- accel/accel.sh@20 -- # IFS=: 00:08:44.711 15:34:05 -- accel/accel.sh@20 -- # read -r var val 00:08:44.711 15:34:05 -- accel/accel.sh@21 -- # val= 00:08:44.711 15:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.711 15:34:06 -- accel/accel.sh@20 -- # IFS=: 00:08:44.711 15:34:06 -- accel/accel.sh@20 -- # read -r var val 00:08:44.711 15:34:06 -- accel/accel.sh@21 -- # val= 00:08:44.711 15:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.711 15:34:06 -- accel/accel.sh@20 -- # IFS=: 00:08:44.711 15:34:06 -- accel/accel.sh@20 -- # read -r var val 00:08:44.711 15:34:06 -- accel/accel.sh@21 -- # val= 00:08:44.711 15:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.711 15:34:06 -- accel/accel.sh@20 -- # IFS=: 00:08:44.711 15:34:06 -- accel/accel.sh@20 -- # read -r var val 00:08:44.711 15:34:06 -- accel/accel.sh@21 -- # val= 00:08:44.711 15:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.711 15:34:06 -- accel/accel.sh@20 -- # IFS=: 00:08:44.711 15:34:06 -- accel/accel.sh@20 -- # read -r var val 00:08:44.711 15:34:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:44.711 15:34:06 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:44.711 15:34:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:44.711 00:08:44.711 real 0m4.803s 00:08:44.711 user 0m4.293s 00:08:44.711 sys 0m0.296s 00:08:44.711 ************************************ 00:08:44.711 END TEST accel_decomp 00:08:44.711 15:34:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.711 15:34:06 -- common/autotest_common.sh@10 -- # set +x 00:08:44.711 ************************************ 00:08:44.711 15:34:06 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
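The accel_decmop_full test just queued repeats the decompress benchmark with one extra flag, -o 0, which, judging by the "Transfer size: 111250 bytes" reported in the configuration that follows, makes accel_perf size each transfer to the full decompressed chunk instead of the default 4096 bytes. A minimal manual equivalent, assuming the same repo layout as the harness:

    # Full-size decompress transfers with verification enabled
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0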
00:08:44.711 15:34:06 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:44.711 15:34:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:44.711 15:34:06 -- common/autotest_common.sh@10 -- # set +x 00:08:44.711 ************************************ 00:08:44.711 START TEST accel_decmop_full 00:08:44.711 ************************************ 00:08:44.711 15:34:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:44.711 15:34:06 -- accel/accel.sh@16 -- # local accel_opc 00:08:44.711 15:34:06 -- accel/accel.sh@17 -- # local accel_module 00:08:44.711 15:34:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:44.711 15:34:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:44.711 15:34:06 -- accel/accel.sh@12 -- # build_accel_config 00:08:44.711 15:34:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:44.711 15:34:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:44.711 15:34:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:44.711 15:34:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:44.711 15:34:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:44.711 15:34:06 -- accel/accel.sh@41 -- # local IFS=, 00:08:44.711 15:34:06 -- accel/accel.sh@42 -- # jq -r . 00:08:44.711 [2024-07-24 15:34:06.114378] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:44.711 [2024-07-24 15:34:06.114541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60142 ] 00:08:44.711 [2024-07-24 15:34:06.284356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.969 [2024-07-24 15:34:06.467242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.499 15:34:08 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:47.499 00:08:47.499 SPDK Configuration: 00:08:47.499 Core mask: 0x1 00:08:47.499 00:08:47.499 Accel Perf Configuration: 00:08:47.499 Workload Type: decompress 00:08:47.499 Transfer size: 111250 bytes 00:08:47.499 Vector count 1 00:08:47.499 Module: software 00:08:47.499 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:47.499 Queue depth: 32 00:08:47.499 Allocate depth: 32 00:08:47.499 # threads/core: 1 00:08:47.499 Run time: 1 seconds 00:08:47.499 Verify: Yes 00:08:47.499 00:08:47.499 Running for 1 seconds... 
00:08:47.499 00:08:47.499 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:47.499 ------------------------------------------------------------------------------------ 00:08:47.499 0,0 4256/s 175 MiB/s 0 0 00:08:47.499 ==================================================================================== 00:08:47.499 Total 4256/s 451 MiB/s 0 0' 00:08:47.499 15:34:08 -- accel/accel.sh@20 -- # IFS=: 00:08:47.499 15:34:08 -- accel/accel.sh@20 -- # read -r var val 00:08:47.499 15:34:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:47.499 15:34:08 -- accel/accel.sh@12 -- # build_accel_config 00:08:47.499 15:34:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:47.499 15:34:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:47.499 15:34:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:47.499 15:34:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:47.499 15:34:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:47.499 15:34:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:47.499 15:34:08 -- accel/accel.sh@41 -- # local IFS=, 00:08:47.499 15:34:08 -- accel/accel.sh@42 -- # jq -r . 00:08:47.499 [2024-07-24 15:34:08.534359] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:47.499 [2024-07-24 15:34:08.534544] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60168 ] 00:08:47.499 [2024-07-24 15:34:08.709566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.499 [2024-07-24 15:34:08.930000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val= 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val= 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val= 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val=0x1 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val= 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val= 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val=decompress 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:47.758 15:34:09 -- accel/accel.sh@20 
-- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val= 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val=software 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@23 -- # accel_module=software 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val=32 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val=32 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val=1 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val=Yes 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val= 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:47.758 15:34:09 -- accel/accel.sh@21 -- # val= 00:08:47.758 15:34:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # IFS=: 00:08:47.758 15:34:09 -- accel/accel.sh@20 -- # read -r var val 00:08:49.657 15:34:10 -- accel/accel.sh@21 -- # val= 00:08:49.657 15:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.657 15:34:10 -- accel/accel.sh@20 -- # IFS=: 00:08:49.657 15:34:10 -- accel/accel.sh@20 -- # read -r var val 00:08:49.657 15:34:10 -- accel/accel.sh@21 -- # val= 00:08:49.657 15:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.657 15:34:10 -- accel/accel.sh@20 -- # IFS=: 00:08:49.657 15:34:10 -- accel/accel.sh@20 -- # read -r var val 00:08:49.657 15:34:10 -- accel/accel.sh@21 -- # val= 00:08:49.657 15:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.657 15:34:10 -- accel/accel.sh@20 -- # IFS=: 00:08:49.657 15:34:10 -- accel/accel.sh@20 -- # read -r var val 00:08:49.657 15:34:10 -- accel/accel.sh@21 -- # 
val= 00:08:49.657 15:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.657 15:34:10 -- accel/accel.sh@20 -- # IFS=: 00:08:49.657 15:34:10 -- accel/accel.sh@20 -- # read -r var val 00:08:49.657 15:34:10 -- accel/accel.sh@21 -- # val= 00:08:49.657 15:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.657 15:34:10 -- accel/accel.sh@20 -- # IFS=: 00:08:49.657 15:34:10 -- accel/accel.sh@20 -- # read -r var val 00:08:49.657 15:34:10 -- accel/accel.sh@21 -- # val= 00:08:49.657 15:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.657 15:34:10 -- accel/accel.sh@20 -- # IFS=: 00:08:49.657 15:34:10 -- accel/accel.sh@20 -- # read -r var val 00:08:49.657 ************************************ 00:08:49.657 END TEST accel_decmop_full 00:08:49.657 ************************************ 00:08:49.657 15:34:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:49.657 15:34:10 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:49.657 15:34:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:49.657 00:08:49.657 real 0m4.886s 00:08:49.657 user 0m4.387s 00:08:49.657 sys 0m0.283s 00:08:49.657 15:34:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.657 15:34:10 -- common/autotest_common.sh@10 -- # set +x 00:08:49.657 15:34:10 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:49.657 15:34:10 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:49.657 15:34:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:49.657 15:34:10 -- common/autotest_common.sh@10 -- # set +x 00:08:49.657 ************************************ 00:08:49.657 START TEST accel_decomp_mcore 00:08:49.657 ************************************ 00:08:49.657 15:34:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:49.657 15:34:10 -- accel/accel.sh@16 -- # local accel_opc 00:08:49.657 15:34:10 -- accel/accel.sh@17 -- # local accel_module 00:08:49.657 15:34:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:49.657 15:34:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:49.657 15:34:11 -- accel/accel.sh@12 -- # build_accel_config 00:08:49.657 15:34:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:49.658 15:34:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:49.658 15:34:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:49.658 15:34:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:49.658 15:34:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:49.658 15:34:11 -- accel/accel.sh@41 -- # local IFS=, 00:08:49.658 15:34:11 -- accel/accel.sh@42 -- # jq -r . 00:08:49.658 [2024-07-24 15:34:11.049577] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:49.658 [2024-07-24 15:34:11.049763] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60220 ] 00:08:49.658 [2024-07-24 15:34:11.231597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.916 [2024-07-24 15:34:11.421043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.916 [2024-07-24 15:34:11.421105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.916 [2024-07-24 15:34:11.421243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.916 [2024-07-24 15:34:11.421246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.446 15:34:13 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:52.446 00:08:52.446 SPDK Configuration: 00:08:52.446 Core mask: 0xf 00:08:52.446 00:08:52.446 Accel Perf Configuration: 00:08:52.446 Workload Type: decompress 00:08:52.446 Transfer size: 4096 bytes 00:08:52.446 Vector count 1 00:08:52.446 Module: software 00:08:52.446 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:52.446 Queue depth: 32 00:08:52.446 Allocate depth: 32 00:08:52.446 # threads/core: 1 00:08:52.446 Run time: 1 seconds 00:08:52.446 Verify: Yes 00:08:52.446 00:08:52.446 Running for 1 seconds... 00:08:52.446 00:08:52.446 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:52.446 ------------------------------------------------------------------------------------ 00:08:52.446 0,0 53472/s 98 MiB/s 0 0 00:08:52.446 3,0 52864/s 97 MiB/s 0 0 00:08:52.446 2,0 52640/s 97 MiB/s 0 0 00:08:52.446 1,0 53472/s 98 MiB/s 0 0 00:08:52.446 ==================================================================================== 00:08:52.446 Total 212448/s 829 MiB/s 0 0' 00:08:52.446 15:34:13 -- accel/accel.sh@20 -- # IFS=: 00:08:52.446 15:34:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:52.446 15:34:13 -- accel/accel.sh@20 -- # read -r var val 00:08:52.447 15:34:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:52.447 15:34:13 -- accel/accel.sh@12 -- # build_accel_config 00:08:52.447 15:34:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:52.447 15:34:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:52.447 15:34:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:52.447 15:34:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:52.447 15:34:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:52.447 15:34:13 -- accel/accel.sh@41 -- # local IFS=, 00:08:52.447 15:34:13 -- accel/accel.sh@42 -- # jq -r . 00:08:52.447 [2024-07-24 15:34:13.518406] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
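The -m 0xf core mask hands this run to four reactors, one bit per core (cores 0 through 3, as the reactor start notices above confirm), and the results table shows near-linear scaling: the four per-core rates sum to the total, and the total bandwidth again follows from the transfer size:

    53472 + 52864 + 52640 + 53472 = 212,448 transfers/s
    212,448 transfers/s x 4096 bytes = 870,187,008 bytes/s, roughly 829 MiB/s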
00:08:52.447 [2024-07-24 15:34:13.518758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60249 ] 00:08:52.447 [2024-07-24 15:34:13.688966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.447 [2024-07-24 15:34:13.869585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.447 [2024-07-24 15:34:13.869724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.447 [2024-07-24 15:34:13.869844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.447 [2024-07-24 15:34:13.869993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.703 15:34:14 -- accel/accel.sh@21 -- # val= 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:52.704 15:34:14 -- accel/accel.sh@21 -- # val= 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:52.704 15:34:14 -- accel/accel.sh@21 -- # val= 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:52.704 15:34:14 -- accel/accel.sh@21 -- # val=0xf 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:52.704 15:34:14 -- accel/accel.sh@21 -- # val= 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:52.704 15:34:14 -- accel/accel.sh@21 -- # val= 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:52.704 15:34:14 -- accel/accel.sh@21 -- # val=decompress 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:52.704 15:34:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:52.704 15:34:14 -- accel/accel.sh@21 -- # val= 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:52.704 15:34:14 -- accel/accel.sh@21 -- # val=software 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@23 -- # accel_module=software 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:52.704 15:34:14 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 
00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:52.704 15:34:14 -- accel/accel.sh@21 -- # val=32 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:52.704 15:34:14 -- accel/accel.sh@21 -- # val=32 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:52.704 15:34:14 -- accel/accel.sh@21 -- # val=1 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:52.704 15:34:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:52.704 15:34:14 -- accel/accel.sh@21 -- # val=Yes 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:52.704 15:34:14 -- accel/accel.sh@21 -- # val= 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:52.704 15:34:14 -- accel/accel.sh@21 -- # val= 00:08:52.704 15:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # IFS=: 00:08:52.704 15:34:14 -- accel/accel.sh@20 -- # read -r var val 00:08:54.601 15:34:15 -- accel/accel.sh@21 -- # val= 00:08:54.601 15:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.601 15:34:15 -- accel/accel.sh@20 -- # IFS=: 00:08:54.601 15:34:15 -- accel/accel.sh@20 -- # read -r var val 00:08:54.601 15:34:15 -- accel/accel.sh@21 -- # val= 00:08:54.601 15:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.601 15:34:15 -- accel/accel.sh@20 -- # IFS=: 00:08:54.601 15:34:15 -- accel/accel.sh@20 -- # read -r var val 00:08:54.601 15:34:15 -- accel/accel.sh@21 -- # val= 00:08:54.601 15:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.601 15:34:15 -- accel/accel.sh@20 -- # IFS=: 00:08:54.601 15:34:15 -- accel/accel.sh@20 -- # read -r var val 00:08:54.601 15:34:15 -- accel/accel.sh@21 -- # val= 00:08:54.601 15:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.601 15:34:15 -- accel/accel.sh@20 -- # IFS=: 00:08:54.602 15:34:15 -- accel/accel.sh@20 -- # read -r var val 00:08:54.602 15:34:15 -- accel/accel.sh@21 -- # val= 00:08:54.602 15:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.602 15:34:15 -- accel/accel.sh@20 -- # IFS=: 00:08:54.602 15:34:15 -- accel/accel.sh@20 -- # read -r var val 00:08:54.602 15:34:15 -- accel/accel.sh@21 -- # val= 00:08:54.602 15:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.602 15:34:15 -- accel/accel.sh@20 -- # IFS=: 00:08:54.602 15:34:15 -- accel/accel.sh@20 -- # read -r var val 00:08:54.602 15:34:15 -- accel/accel.sh@21 -- # val= 00:08:54.602 15:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.602 15:34:15 -- accel/accel.sh@20 -- # IFS=: 00:08:54.602 15:34:15 -- accel/accel.sh@20 -- # read -r var val 00:08:54.602 15:34:15 -- accel/accel.sh@21 -- # val= 00:08:54.602 15:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.602 15:34:15 -- accel/accel.sh@20 -- # IFS=: 00:08:54.602 15:34:15 -- 
accel/accel.sh@20 -- # read -r var val 00:08:54.602 15:34:15 -- accel/accel.sh@21 -- # val= 00:08:54.602 15:34:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.602 15:34:15 -- accel/accel.sh@20 -- # IFS=: 00:08:54.602 15:34:15 -- accel/accel.sh@20 -- # read -r var val 00:08:54.602 15:34:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:54.602 15:34:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:54.602 15:34:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:54.602 00:08:54.602 real 0m4.887s 00:08:54.602 user 0m14.295s 00:08:54.602 sys 0m0.348s 00:08:54.602 ************************************ 00:08:54.602 END TEST accel_decomp_mcore 00:08:54.602 ************************************ 00:08:54.602 15:34:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.602 15:34:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.602 15:34:15 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:54.602 15:34:15 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:54.602 15:34:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:54.602 15:34:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.602 ************************************ 00:08:54.602 START TEST accel_decomp_full_mcore 00:08:54.602 ************************************ 00:08:54.602 15:34:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:54.602 15:34:15 -- accel/accel.sh@16 -- # local accel_opc 00:08:54.602 15:34:15 -- accel/accel.sh@17 -- # local accel_module 00:08:54.602 15:34:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:54.602 15:34:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:54.602 15:34:15 -- accel/accel.sh@12 -- # build_accel_config 00:08:54.602 15:34:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:54.602 15:34:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:54.602 15:34:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:54.602 15:34:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:54.602 15:34:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:54.602 15:34:15 -- accel/accel.sh@41 -- # local IFS=, 00:08:54.602 15:34:15 -- accel/accel.sh@42 -- # jq -r . 00:08:54.602 [2024-07-24 15:34:15.975767] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:54.602 [2024-07-24 15:34:15.975898] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60293 ] 00:08:54.602 [2024-07-24 15:34:16.140438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.860 [2024-07-24 15:34:16.327723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.860 [2024-07-24 15:34:16.327884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.860 [2024-07-24 15:34:16.328009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.860 [2024-07-24 15:34:16.328022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.388 15:34:18 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:57.388 00:08:57.388 SPDK Configuration: 00:08:57.388 Core mask: 0xf 00:08:57.388 00:08:57.388 Accel Perf Configuration: 00:08:57.388 Workload Type: decompress 00:08:57.388 Transfer size: 111250 bytes 00:08:57.388 Vector count 1 00:08:57.388 Module: software 00:08:57.388 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:57.388 Queue depth: 32 00:08:57.388 Allocate depth: 32 00:08:57.388 # threads/core: 1 00:08:57.388 Run time: 1 seconds 00:08:57.388 Verify: Yes 00:08:57.388 00:08:57.388 Running for 1 seconds... 00:08:57.388 00:08:57.388 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:57.388 ------------------------------------------------------------------------------------ 00:08:57.388 0,0 4256/s 175 MiB/s 0 0 00:08:57.388 3,0 4256/s 175 MiB/s 0 0 00:08:57.388 2,0 4256/s 175 MiB/s 0 0 00:08:57.388 1,0 4224/s 174 MiB/s 0 0 00:08:57.388 ==================================================================================== 00:08:57.388 Total 16992/s 1802 MiB/s 0 0' 00:08:57.388 15:34:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.388 15:34:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:57.388 15:34:18 -- accel/accel.sh@12 -- # build_accel_config 00:08:57.388 15:34:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:57.388 15:34:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:57.388 15:34:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:57.388 15:34:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:57.388 15:34:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:57.388 15:34:18 -- accel/accel.sh@41 -- # local IFS=, 00:08:57.388 15:34:18 -- accel/accel.sh@42 -- # jq -r . 00:08:57.388 [2024-07-24 15:34:18.435477] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
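This run combines the two earlier variations, full 111250-byte transfers (-o 0) and the four-core mask (-m 0xf), and the aggregate in the table above again checks out against the per-core rows:

    4256 + 4256 + 4256 + 4224 = 16,992 transfers/s
    16,992 transfers/s x 111,250 bytes = 1,890,360,000 bytes/s, roughly 1802 MiB/s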
00:08:57.388 [2024-07-24 15:34:18.435670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60333 ] 00:08:57.388 [2024-07-24 15:34:18.604256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.388 [2024-07-24 15:34:18.791063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.388 [2024-07-24 15:34:18.791192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.388 [2024-07-24 15:34:18.791318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.388 [2024-07-24 15:34:18.791330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.388 15:34:18 -- accel/accel.sh@21 -- # val= 00:08:57.388 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.388 15:34:18 -- accel/accel.sh@21 -- # val= 00:08:57.388 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.388 15:34:18 -- accel/accel.sh@21 -- # val= 00:08:57.388 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.388 15:34:18 -- accel/accel.sh@21 -- # val=0xf 00:08:57.388 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.388 15:34:18 -- accel/accel.sh@21 -- # val= 00:08:57.388 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.388 15:34:18 -- accel/accel.sh@21 -- # val= 00:08:57.388 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.388 15:34:18 -- accel/accel.sh@21 -- # val=decompress 00:08:57.388 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.388 15:34:18 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.388 15:34:18 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:57.388 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.388 15:34:18 -- accel/accel.sh@21 -- # val= 00:08:57.388 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.388 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.646 15:34:18 -- accel/accel.sh@21 -- # val=software 00:08:57.646 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.646 15:34:18 -- accel/accel.sh@23 -- # accel_module=software 00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.646 15:34:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:57.646 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # IFS=: 
00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.646 15:34:18 -- accel/accel.sh@21 -- # val=32 00:08:57.646 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.646 15:34:18 -- accel/accel.sh@21 -- # val=32 00:08:57.646 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.646 15:34:18 -- accel/accel.sh@21 -- # val=1 00:08:57.646 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.646 15:34:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:57.646 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.646 15:34:18 -- accel/accel.sh@21 -- # val=Yes 00:08:57.646 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.646 15:34:18 -- accel/accel.sh@21 -- # val= 00:08:57.646 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:57.646 15:34:18 -- accel/accel.sh@21 -- # val= 00:08:57.646 15:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # IFS=: 00:08:57.646 15:34:18 -- accel/accel.sh@20 -- # read -r var val 00:08:59.548 15:34:20 -- accel/accel.sh@21 -- # val= 00:08:59.548 15:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.548 15:34:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.548 15:34:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.548 15:34:20 -- accel/accel.sh@21 -- # val= 00:08:59.548 15:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.548 15:34:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.548 15:34:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.548 15:34:20 -- accel/accel.sh@21 -- # val= 00:08:59.548 15:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.548 15:34:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.548 15:34:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.548 15:34:20 -- accel/accel.sh@21 -- # val= 00:08:59.548 15:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.548 15:34:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.548 15:34:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.548 15:34:20 -- accel/accel.sh@21 -- # val= 00:08:59.548 15:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.548 15:34:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.548 15:34:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.548 15:34:20 -- accel/accel.sh@21 -- # val= 00:08:59.548 15:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.548 15:34:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.548 15:34:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.548 15:34:20 -- accel/accel.sh@21 -- # val= 00:08:59.548 15:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.548 15:34:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.548 15:34:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.548 15:34:20 -- accel/accel.sh@21 -- # val= 00:08:59.548 15:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.548 15:34:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.548 15:34:20 -- 
accel/accel.sh@20 -- # read -r var val 00:08:59.548 15:34:20 -- accel/accel.sh@21 -- # val= 00:08:59.548 15:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.548 15:34:20 -- accel/accel.sh@20 -- # IFS=: 00:08:59.548 15:34:20 -- accel/accel.sh@20 -- # read -r var val 00:08:59.548 15:34:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:59.548 15:34:20 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:59.548 15:34:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:59.548 00:08:59.548 real 0m4.957s 00:08:59.548 user 0m14.634s 00:08:59.548 sys 0m0.320s 00:08:59.548 15:34:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.548 15:34:20 -- common/autotest_common.sh@10 -- # set +x 00:08:59.548 ************************************ 00:08:59.548 END TEST accel_decomp_full_mcore 00:08:59.548 ************************************ 00:08:59.548 15:34:20 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:59.548 15:34:20 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:59.548 15:34:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:59.548 15:34:20 -- common/autotest_common.sh@10 -- # set +x 00:08:59.548 ************************************ 00:08:59.548 START TEST accel_decomp_mthread 00:08:59.548 ************************************ 00:08:59.548 15:34:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:59.548 15:34:20 -- accel/accel.sh@16 -- # local accel_opc 00:08:59.549 15:34:20 -- accel/accel.sh@17 -- # local accel_module 00:08:59.549 15:34:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:59.549 15:34:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:59.549 15:34:20 -- accel/accel.sh@12 -- # build_accel_config 00:08:59.549 15:34:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:59.549 15:34:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:59.549 15:34:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:59.549 15:34:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:59.549 15:34:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:59.549 15:34:20 -- accel/accel.sh@41 -- # local IFS=, 00:08:59.549 15:34:20 -- accel/accel.sh@42 -- # jq -r . 00:08:59.549 [2024-07-24 15:34:20.983731] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:59.549 [2024-07-24 15:34:20.983871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60377 ] 00:08:59.806 [2024-07-24 15:34:21.147174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.806 [2024-07-24 15:34:21.331994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.333 15:34:23 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:09:02.333 00:09:02.333 SPDK Configuration: 00:09:02.333 Core mask: 0x1 00:09:02.333 00:09:02.333 Accel Perf Configuration: 00:09:02.333 Workload Type: decompress 00:09:02.333 Transfer size: 4096 bytes 00:09:02.333 Vector count 1 00:09:02.333 Module: software 00:09:02.333 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:02.333 Queue depth: 32 00:09:02.333 Allocate depth: 32 00:09:02.333 # threads/core: 2 00:09:02.333 Run time: 1 seconds 00:09:02.333 Verify: Yes 00:09:02.333 00:09:02.333 Running for 1 seconds... 00:09:02.333 00:09:02.333 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:02.333 ------------------------------------------------------------------------------------ 00:09:02.333 0,1 28960/s 53 MiB/s 0 0 00:09:02.333 0,0 28864/s 53 MiB/s 0 0 00:09:02.333 ==================================================================================== 00:09:02.333 Total 57824/s 225 MiB/s 0 0' 00:09:02.333 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.333 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.333 15:34:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:02.333 15:34:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:02.333 15:34:23 -- accel/accel.sh@12 -- # build_accel_config 00:09:02.333 15:34:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:02.333 15:34:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:02.333 15:34:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:02.333 15:34:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:02.333 15:34:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:02.333 15:34:23 -- accel/accel.sh@41 -- # local IFS=, 00:09:02.333 15:34:23 -- accel/accel.sh@42 -- # jq -r . 00:09:02.333 [2024-07-24 15:34:23.393335] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
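Here -T 2 asks accel_perf for two worker threads per core ("# threads/core: 2" in the configuration block above), which is why the results table carries two rows for core 0, the Core,Thread pairs 0,0 and 0,1, each moving roughly half of the 57824 transfers/s total. A sketch of the same single-core, two-thread run, under the same assumed paths:

    # Decompress with verification, two threads on one core
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2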
00:09:02.333 [2024-07-24 15:34:23.393483] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60409 ] 00:09:02.333 [2024-07-24 15:34:23.562114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.333 [2024-07-24 15:34:23.746523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.591 15:34:23 -- accel/accel.sh@21 -- # val= 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.591 15:34:23 -- accel/accel.sh@21 -- # val= 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.591 15:34:23 -- accel/accel.sh@21 -- # val= 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.591 15:34:23 -- accel/accel.sh@21 -- # val=0x1 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.591 15:34:23 -- accel/accel.sh@21 -- # val= 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.591 15:34:23 -- accel/accel.sh@21 -- # val= 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.591 15:34:23 -- accel/accel.sh@21 -- # val=decompress 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.591 15:34:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.591 15:34:23 -- accel/accel.sh@21 -- # val= 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.591 15:34:23 -- accel/accel.sh@21 -- # val=software 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@23 -- # accel_module=software 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.591 15:34:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.591 15:34:23 -- accel/accel.sh@21 -- # val=32 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.591 15:34:23 -- 
accel/accel.sh@21 -- # val=32 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.591 15:34:23 -- accel/accel.sh@21 -- # val=2 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.591 15:34:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.591 15:34:23 -- accel/accel.sh@21 -- # val=Yes 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.591 15:34:23 -- accel/accel.sh@21 -- # val= 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:02.591 15:34:23 -- accel/accel.sh@21 -- # val= 00:09:02.591 15:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # IFS=: 00:09:02.591 15:34:23 -- accel/accel.sh@20 -- # read -r var val 00:09:04.490 15:34:25 -- accel/accel.sh@21 -- # val= 00:09:04.490 15:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.490 15:34:25 -- accel/accel.sh@20 -- # IFS=: 00:09:04.490 15:34:25 -- accel/accel.sh@20 -- # read -r var val 00:09:04.490 15:34:25 -- accel/accel.sh@21 -- # val= 00:09:04.490 15:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.490 15:34:25 -- accel/accel.sh@20 -- # IFS=: 00:09:04.490 15:34:25 -- accel/accel.sh@20 -- # read -r var val 00:09:04.490 15:34:25 -- accel/accel.sh@21 -- # val= 00:09:04.490 15:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.490 15:34:25 -- accel/accel.sh@20 -- # IFS=: 00:09:04.490 15:34:25 -- accel/accel.sh@20 -- # read -r var val 00:09:04.490 15:34:25 -- accel/accel.sh@21 -- # val= 00:09:04.490 15:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.490 15:34:25 -- accel/accel.sh@20 -- # IFS=: 00:09:04.490 15:34:25 -- accel/accel.sh@20 -- # read -r var val 00:09:04.490 15:34:25 -- accel/accel.sh@21 -- # val= 00:09:04.490 15:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.490 15:34:25 -- accel/accel.sh@20 -- # IFS=: 00:09:04.490 15:34:25 -- accel/accel.sh@20 -- # read -r var val 00:09:04.490 15:34:25 -- accel/accel.sh@21 -- # val= 00:09:04.490 15:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.490 15:34:25 -- accel/accel.sh@20 -- # IFS=: 00:09:04.490 15:34:25 -- accel/accel.sh@20 -- # read -r var val 00:09:04.490 15:34:25 -- accel/accel.sh@21 -- # val= 00:09:04.490 15:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.490 15:34:25 -- accel/accel.sh@20 -- # IFS=: 00:09:04.490 15:34:25 -- accel/accel.sh@20 -- # read -r var val 00:09:04.490 15:34:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:04.490 15:34:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:04.490 15:34:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:04.490 00:09:04.490 real 0m4.818s 00:09:04.491 user 0m4.335s 00:09:04.491 sys 0m0.269s 00:09:04.491 15:34:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.491 15:34:25 -- common/autotest_common.sh@10 -- # set +x 00:09:04.491 ************************************ 00:09:04.491 END 
TEST accel_decomp_mthread 00:09:04.491 ************************************ 00:09:04.491 15:34:25 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:04.491 15:34:25 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:04.491 15:34:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:04.491 15:34:25 -- common/autotest_common.sh@10 -- # set +x 00:09:04.491 ************************************ 00:09:04.491 START TEST accel_deomp_full_mthread 00:09:04.491 ************************************ 00:09:04.491 15:34:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:04.491 15:34:25 -- accel/accel.sh@16 -- # local accel_opc 00:09:04.491 15:34:25 -- accel/accel.sh@17 -- # local accel_module 00:09:04.491 15:34:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:04.491 15:34:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:04.491 15:34:25 -- accel/accel.sh@12 -- # build_accel_config 00:09:04.491 15:34:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:04.491 15:34:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:04.491 15:34:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:04.491 15:34:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:04.491 15:34:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:04.491 15:34:25 -- accel/accel.sh@41 -- # local IFS=, 00:09:04.491 15:34:25 -- accel/accel.sh@42 -- # jq -r . 00:09:04.491 [2024-07-24 15:34:25.856485] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:04.491 [2024-07-24 15:34:25.856652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60455 ] 00:09:04.491 [2024-07-24 15:34:26.025535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.749 [2024-07-24 15:34:26.208740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.277 15:34:28 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:07.277 00:09:07.277 SPDK Configuration: 00:09:07.277 Core mask: 0x1 00:09:07.277 00:09:07.277 Accel Perf Configuration: 00:09:07.277 Workload Type: decompress 00:09:07.277 Transfer size: 111250 bytes 00:09:07.277 Vector count 1 00:09:07.277 Module: software 00:09:07.277 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:07.277 Queue depth: 32 00:09:07.277 Allocate depth: 32 00:09:07.277 # threads/core: 2 00:09:07.277 Run time: 1 seconds 00:09:07.277 Verify: Yes 00:09:07.277 00:09:07.277 Running for 1 seconds... 
00:09:07.277 00:09:07.277 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:07.277 ------------------------------------------------------------------------------------ 00:09:07.277 0,1 2176/s 89 MiB/s 0 0 00:09:07.277 0,0 2144/s 88 MiB/s 0 0 00:09:07.277 ==================================================================================== 00:09:07.278 Total 4320/s 458 MiB/s 0 0' 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:07.278 15:34:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:07.278 15:34:28 -- accel/accel.sh@12 -- # build_accel_config 00:09:07.278 15:34:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:07.278 15:34:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:07.278 15:34:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:07.278 15:34:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:07.278 15:34:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:07.278 15:34:28 -- accel/accel.sh@41 -- # local IFS=, 00:09:07.278 15:34:28 -- accel/accel.sh@42 -- # jq -r . 00:09:07.278 [2024-07-24 15:34:28.306114] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:07.278 [2024-07-24 15:34:28.306241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60487 ] 00:09:07.278 [2024-07-24 15:34:28.465303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.278 [2024-07-24 15:34:28.667274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val= 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val= 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val= 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val=0x1 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val= 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val= 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val=decompress 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val='111250 bytes' 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val= 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val=software 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@23 -- # accel_module=software 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val=32 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val=32 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val=2 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val=Yes 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val= 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:07.278 15:34:28 -- accel/accel.sh@21 -- # val= 00:09:07.278 15:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # IFS=: 00:09:07.278 15:34:28 -- accel/accel.sh@20 -- # read -r var val 00:09:09.179 15:34:30 -- accel/accel.sh@21 -- # val= 00:09:09.179 15:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.179 15:34:30 -- accel/accel.sh@20 -- # IFS=: 00:09:09.179 15:34:30 -- accel/accel.sh@20 -- # read -r var val 00:09:09.179 15:34:30 -- accel/accel.sh@21 -- # val= 00:09:09.179 15:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.179 15:34:30 -- accel/accel.sh@20 -- # IFS=: 00:09:09.179 15:34:30 -- accel/accel.sh@20 -- # read -r var val 00:09:09.179 15:34:30 -- accel/accel.sh@21 -- # val= 00:09:09.179 15:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.179 15:34:30 -- accel/accel.sh@20 -- # IFS=: 00:09:09.179 15:34:30 -- accel/accel.sh@20 -- # 
read -r var val 00:09:09.179 15:34:30 -- accel/accel.sh@21 -- # val= 00:09:09.179 15:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.179 15:34:30 -- accel/accel.sh@20 -- # IFS=: 00:09:09.179 15:34:30 -- accel/accel.sh@20 -- # read -r var val 00:09:09.179 15:34:30 -- accel/accel.sh@21 -- # val= 00:09:09.179 15:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.179 15:34:30 -- accel/accel.sh@20 -- # IFS=: 00:09:09.179 15:34:30 -- accel/accel.sh@20 -- # read -r var val 00:09:09.179 15:34:30 -- accel/accel.sh@21 -- # val= 00:09:09.179 15:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.179 15:34:30 -- accel/accel.sh@20 -- # IFS=: 00:09:09.179 15:34:30 -- accel/accel.sh@20 -- # read -r var val 00:09:09.179 15:34:30 -- accel/accel.sh@21 -- # val= 00:09:09.179 15:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.179 15:34:30 -- accel/accel.sh@20 -- # IFS=: 00:09:09.179 15:34:30 -- accel/accel.sh@20 -- # read -r var val 00:09:09.179 15:34:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:09.179 15:34:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:09.179 15:34:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:09.179 00:09:09.179 real 0m4.885s 00:09:09.179 user 0m4.374s 00:09:09.179 sys 0m0.301s 00:09:09.179 15:34:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.179 15:34:30 -- common/autotest_common.sh@10 -- # set +x 00:09:09.179 ************************************ 00:09:09.179 END TEST accel_deomp_full_mthread 00:09:09.179 ************************************ 00:09:09.179 15:34:30 -- accel/accel.sh@116 -- # [[ n == y ]] 00:09:09.179 15:34:30 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:09.179 15:34:30 -- accel/accel.sh@129 -- # build_accel_config 00:09:09.179 15:34:30 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:09.179 15:34:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.179 15:34:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:09.179 15:34:30 -- common/autotest_common.sh@10 -- # set +x 00:09:09.179 15:34:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:09.179 15:34:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:09.179 15:34:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:09.179 15:34:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:09.179 15:34:30 -- accel/accel.sh@41 -- # local IFS=, 00:09:09.179 15:34:30 -- accel/accel.sh@42 -- # jq -r . 00:09:09.179 ************************************ 00:09:09.179 START TEST accel_dif_functional_tests 00:09:09.179 ************************************ 00:09:09.179 15:34:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:09.438 [2024-07-24 15:34:30.835028] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:09:09.438 [2024-07-24 15:34:30.835232] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60529 ] 00:09:09.438 [2024-07-24 15:34:31.005348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:09.696 [2024-07-24 15:34:31.193526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.696 [2024-07-24 15:34:31.193684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.696 [2024-07-24 15:34:31.193695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.954 00:09:09.954 00:09:09.954 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.954 http://cunit.sourceforge.net/ 00:09:09.954 00:09:09.954 00:09:09.954 Suite: accel_dif 00:09:09.954 Test: verify: DIF generated, GUARD check ...passed 00:09:09.954 Test: verify: DIF generated, APPTAG check ...passed 00:09:09.954 Test: verify: DIF generated, REFTAG check ...passed 00:09:09.954 Test: verify: DIF not generated, GUARD check ...passed 00:09:09.954 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 15:34:31.464577] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:09.954 [2024-07-24 15:34:31.464657] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:09.954 [2024-07-24 15:34:31.464724] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:09.954 passed 00:09:09.954 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 15:34:31.464900] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:09.954 passed 00:09:09.954 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:09.954 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 15:34:31.464962] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:09.954 [2024-07-24 15:34:31.464996] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:09.954 [2024-07-24 15:34:31.465241] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:09.954 passed 00:09:09.954 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:09:09.954 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:09.954 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:09.954 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:09:09.954 Test: generate copy: DIF generated, GUARD check ...passed 00:09:09.954 Test: generate copy: DIF generated, APTTAG check ...[2024-07-24 15:34:31.465628] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:09.954 passed 00:09:09.954 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:09.954 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:09.954 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:09.954 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:09.954 Test: generate copy: iovecs-len validate ...passed 00:09:09.954 Test: generate copy: buffer alignment validate ...passed[2024-07-24 15:34:31.466408] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
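Each negative case in the suite above works by injecting one corrupted field and asserting that _dif_verify fails the matching comparison: the guard check compares a CRC over the block (Expected=5a5a vs Actual=7867 above), the application-tag and reference-tag checks compare the 16-bit and 32-bit tags stored in the 8-byte DIF trailer, and the iovecs-len case asserts that spdk_dif_generate_copy rejects bounce buffers misaligned with the block size. The binary can be rerun on its own; whether it accepts an empty config is an assumption, the harness normally pipes in build_accel_config output:

    # Sketch: rerun the DIF functional tests directly (assumes a built workspace;
    # an empty JSON config stands in for the harness-generated one).
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c <(echo '{}')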
00:09:09.954 00:09:09.954 00:09:09.954 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.954 suites 1 1 n/a 0 0 00:09:09.954 tests 20 20 20 0 0 00:09:09.954 asserts 204 204 204 0 n/a 00:09:09.954 00:09:09.954 Elapsed time = 0.007 seconds 00:09:11.330 00:09:11.330 real 0m1.781s 00:09:11.330 user 0m3.395s 00:09:11.330 sys 0m0.210s 00:09:11.330 ************************************ 00:09:11.330 END TEST accel_dif_functional_tests 00:09:11.330 ************************************ 00:09:11.330 15:34:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.330 15:34:32 -- common/autotest_common.sh@10 -- # set +x 00:09:11.330 00:09:11.330 real 1m46.108s 00:09:11.330 user 1m56.668s 00:09:11.330 sys 0m7.736s 00:09:11.330 ************************************ 00:09:11.330 END TEST accel 00:09:11.330 ************************************ 00:09:11.330 15:34:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.330 15:34:32 -- common/autotest_common.sh@10 -- # set +x 00:09:11.330 15:34:32 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:11.330 15:34:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:11.331 15:34:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:11.331 15:34:32 -- common/autotest_common.sh@10 -- # set +x 00:09:11.331 ************************************ 00:09:11.331 START TEST accel_rpc 00:09:11.331 ************************************ 00:09:11.331 15:34:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:11.331 * Looking for test storage... 00:09:11.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:11.331 15:34:32 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:11.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.331 15:34:32 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=60610 00:09:11.331 15:34:32 -- accel/accel_rpc.sh@15 -- # waitforlisten 60610 00:09:11.331 15:34:32 -- common/autotest_common.sh@819 -- # '[' -z 60610 ']' 00:09:11.331 15:34:32 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:11.331 15:34:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.331 15:34:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:11.331 15:34:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.331 15:34:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:11.331 15:34:32 -- common/autotest_common.sh@10 -- # set +x 00:09:11.331 [2024-07-24 15:34:32.799438] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
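accel_rpc.sh drives the opcode-assignment RPCs in a deliberate order: the target is launched with --wait-for-rpc so nothing is initialized yet, the copy opcode is first assigned to a module literally named "incorrect" (accepted at this stage, as the NOTICE below shows) and then to "software", and only after framework_start_init does accel_get_opc_assignments confirm which assignment took effect. The same sequence by hand, assuming the default /var/tmp/spdk.sock socket:

    # Sketch: reassign the copy opcode before subsystem init, then verify it stuck.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o copy -m software
    $rpc framework_start_init
    $rpc accel_get_opc_assignments | jq -r .copy    # expected output: software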
00:09:11.331 [2024-07-24 15:34:32.799594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60610 ] 00:09:11.589 [2024-07-24 15:34:32.965427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.589 [2024-07-24 15:34:33.144730] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:11.589 [2024-07-24 15:34:33.144954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.156 15:34:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:12.156 15:34:33 -- common/autotest_common.sh@852 -- # return 0 00:09:12.156 15:34:33 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:12.156 15:34:33 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:12.156 15:34:33 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:12.156 15:34:33 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:12.156 15:34:33 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:12.156 15:34:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:12.156 15:34:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:12.156 15:34:33 -- common/autotest_common.sh@10 -- # set +x 00:09:12.156 ************************************ 00:09:12.156 START TEST accel_assign_opcode 00:09:12.156 ************************************ 00:09:12.156 15:34:33 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:09:12.156 15:34:33 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:12.156 15:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.156 15:34:33 -- common/autotest_common.sh@10 -- # set +x 00:09:12.156 [2024-07-24 15:34:33.689739] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:12.156 15:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.156 15:34:33 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:12.156 15:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.156 15:34:33 -- common/autotest_common.sh@10 -- # set +x 00:09:12.156 [2024-07-24 15:34:33.697693] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:12.156 15:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.156 15:34:33 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:12.156 15:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.156 15:34:33 -- common/autotest_common.sh@10 -- # set +x 00:09:13.091 15:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.091 15:34:34 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:13.091 15:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.091 15:34:34 -- common/autotest_common.sh@10 -- # set +x 00:09:13.091 15:34:34 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:13.091 15:34:34 -- accel/accel_rpc.sh@42 -- # grep software 00:09:13.091 15:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.091 software 00:09:13.091 ************************************ 00:09:13.091 END TEST accel_assign_opcode 00:09:13.091 ************************************ 00:09:13.091 00:09:13.091 real 0m0.726s 00:09:13.091 user 0m0.050s 00:09:13.091 sys 0m0.013s 00:09:13.091 15:34:34 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.091 15:34:34 -- common/autotest_common.sh@10 -- # set +x 00:09:13.091 15:34:34 -- accel/accel_rpc.sh@55 -- # killprocess 60610 00:09:13.091 15:34:34 -- common/autotest_common.sh@926 -- # '[' -z 60610 ']' 00:09:13.091 15:34:34 -- common/autotest_common.sh@930 -- # kill -0 60610 00:09:13.091 15:34:34 -- common/autotest_common.sh@931 -- # uname 00:09:13.091 15:34:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:13.091 15:34:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60610 00:09:13.091 killing process with pid 60610 00:09:13.091 15:34:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:13.091 15:34:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:13.091 15:34:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60610' 00:09:13.091 15:34:34 -- common/autotest_common.sh@945 -- # kill 60610 00:09:13.091 15:34:34 -- common/autotest_common.sh@950 -- # wait 60610 00:09:14.993 00:09:14.993 real 0m3.856s 00:09:14.993 user 0m3.901s 00:09:14.993 sys 0m0.435s 00:09:14.993 ************************************ 00:09:14.993 END TEST accel_rpc 00:09:14.993 ************************************ 00:09:14.993 15:34:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.993 15:34:36 -- common/autotest_common.sh@10 -- # set +x 00:09:14.993 15:34:36 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:14.993 15:34:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:14.993 15:34:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.993 15:34:36 -- common/autotest_common.sh@10 -- # set +x 00:09:14.993 ************************************ 00:09:14.993 START TEST app_cmdline 00:09:14.993 ************************************ 00:09:14.993 15:34:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:14.993 * Looking for test storage... 00:09:15.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:15.251 15:34:36 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:15.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.251 15:34:36 -- app/cmdline.sh@17 -- # spdk_tgt_pid=60725 00:09:15.251 15:34:36 -- app/cmdline.sh@18 -- # waitforlisten 60725 00:09:15.251 15:34:36 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:15.251 15:34:36 -- common/autotest_common.sh@819 -- # '[' -z 60725 ']' 00:09:15.252 15:34:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.252 15:34:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:15.252 15:34:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.252 15:34:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:15.252 15:34:36 -- common/autotest_common.sh@10 -- # set +x 00:09:15.252 [2024-07-24 15:34:36.690784] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
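cmdline.sh starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods are the entire callable surface: the test first checks that exactly these methods are listed, then deliberately calls env_dpdk_get_mem_stats and, via the NOT helper, requires it to fail with JSON-RPC error -32601 (Method not found), which is the response shown just below. A manual probe of the same target, default socket assumed:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version          # allowed: returns the version object
    $rpc rpc_get_methods           # allowed: lists the whitelisted methods
    $rpc env_dpdk_get_mem_stats    # rejected with -32601, Method not found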
00:09:15.252 [2024-07-24 15:34:36.690930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60725 ] 00:09:15.510 [2024-07-24 15:34:36.849055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.510 [2024-07-24 15:34:37.031748] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:15.510 [2024-07-24 15:34:37.032029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.882 15:34:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:16.882 15:34:38 -- common/autotest_common.sh@852 -- # return 0 00:09:16.882 15:34:38 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:17.140 { 00:09:17.140 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:09:17.140 "fields": { 00:09:17.140 "major": 24, 00:09:17.140 "minor": 1, 00:09:17.140 "patch": 1, 00:09:17.140 "suffix": "-pre", 00:09:17.140 "commit": "dbef7efac" 00:09:17.140 } 00:09:17.140 } 00:09:17.140 15:34:38 -- app/cmdline.sh@22 -- # expected_methods=() 00:09:17.140 15:34:38 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:17.140 15:34:38 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:17.140 15:34:38 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:17.140 15:34:38 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:17.140 15:34:38 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:17.140 15:34:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:17.140 15:34:38 -- app/cmdline.sh@26 -- # sort 00:09:17.140 15:34:38 -- common/autotest_common.sh@10 -- # set +x 00:09:17.140 15:34:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:17.140 15:34:38 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:17.140 15:34:38 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:17.140 15:34:38 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:17.140 15:34:38 -- common/autotest_common.sh@640 -- # local es=0 00:09:17.140 15:34:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:17.140 15:34:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:17.140 15:34:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:17.140 15:34:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:17.140 15:34:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:17.141 15:34:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:17.141 15:34:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:17.141 15:34:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:17.141 15:34:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:17.141 15:34:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:17.398 request: 00:09:17.398 { 00:09:17.398 "method": "env_dpdk_get_mem_stats", 00:09:17.398 "req_id": 1 00:09:17.398 } 00:09:17.398 Got 
JSON-RPC error response 00:09:17.398 response: 00:09:17.398 { 00:09:17.398 "code": -32601, 00:09:17.398 "message": "Method not found" 00:09:17.398 } 00:09:17.398 15:34:38 -- common/autotest_common.sh@643 -- # es=1 00:09:17.398 15:34:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:17.398 15:34:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:17.398 15:34:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:17.398 15:34:38 -- app/cmdline.sh@1 -- # killprocess 60725 00:09:17.398 15:34:38 -- common/autotest_common.sh@926 -- # '[' -z 60725 ']' 00:09:17.398 15:34:38 -- common/autotest_common.sh@930 -- # kill -0 60725 00:09:17.398 15:34:38 -- common/autotest_common.sh@931 -- # uname 00:09:17.398 15:34:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:17.398 15:34:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60725 00:09:17.398 killing process with pid 60725 00:09:17.398 15:34:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:17.398 15:34:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:17.398 15:34:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60725' 00:09:17.398 15:34:38 -- common/autotest_common.sh@945 -- # kill 60725 00:09:17.398 15:34:38 -- common/autotest_common.sh@950 -- # wait 60725 00:09:19.970 00:09:19.970 real 0m4.512s 00:09:19.970 user 0m5.195s 00:09:19.970 sys 0m0.530s 00:09:19.970 ************************************ 00:09:19.970 END TEST app_cmdline 00:09:19.970 ************************************ 00:09:19.970 15:34:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.970 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:09:19.970 15:34:41 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:19.970 15:34:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:19.970 15:34:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:19.970 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:09:19.970 ************************************ 00:09:19.970 START TEST version 00:09:19.970 ************************************ 00:09:19.970 15:34:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:19.970 * Looking for test storage... 
00:09:19.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:19.970 15:34:41 -- app/version.sh@17 -- # get_header_version major 00:09:19.970 15:34:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:19.970 15:34:41 -- app/version.sh@14 -- # cut -f2 00:09:19.970 15:34:41 -- app/version.sh@14 -- # tr -d '"' 00:09:19.970 15:34:41 -- app/version.sh@17 -- # major=24 00:09:19.970 15:34:41 -- app/version.sh@18 -- # get_header_version minor 00:09:19.970 15:34:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:19.970 15:34:41 -- app/version.sh@14 -- # tr -d '"' 00:09:19.970 15:34:41 -- app/version.sh@14 -- # cut -f2 00:09:19.970 15:34:41 -- app/version.sh@18 -- # minor=1 00:09:19.970 15:34:41 -- app/version.sh@19 -- # get_header_version patch 00:09:19.970 15:34:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:19.970 15:34:41 -- app/version.sh@14 -- # cut -f2 00:09:19.970 15:34:41 -- app/version.sh@14 -- # tr -d '"' 00:09:19.970 15:34:41 -- app/version.sh@19 -- # patch=1 00:09:19.970 15:34:41 -- app/version.sh@20 -- # get_header_version suffix 00:09:19.970 15:34:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:19.970 15:34:41 -- app/version.sh@14 -- # cut -f2 00:09:19.970 15:34:41 -- app/version.sh@14 -- # tr -d '"' 00:09:19.970 15:34:41 -- app/version.sh@20 -- # suffix=-pre 00:09:19.970 15:34:41 -- app/version.sh@22 -- # version=24.1 00:09:19.970 15:34:41 -- app/version.sh@25 -- # (( patch != 0 )) 00:09:19.970 15:34:41 -- app/version.sh@25 -- # version=24.1.1 00:09:19.970 15:34:41 -- app/version.sh@28 -- # version=24.1.1rc0 00:09:19.970 15:34:41 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:19.970 15:34:41 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:19.970 15:34:41 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:09:19.970 15:34:41 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:09:19.970 00:09:19.970 real 0m0.150s 00:09:19.970 user 0m0.093s 00:09:19.970 sys 0m0.087s 00:09:19.970 ************************************ 00:09:19.970 END TEST version 00:09:19.970 ************************************ 00:09:19.970 15:34:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.970 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:09:19.970 15:34:41 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:09:19.970 15:34:41 -- spdk/autotest.sh@204 -- # uname -s 00:09:19.970 15:34:41 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:09:19.970 15:34:41 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:09:19.970 15:34:41 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:09:19.970 15:34:41 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:09:19.970 15:34:41 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:19.970 15:34:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:19.970 15:34:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:19.970 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:09:19.970 
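version.sh never runs a binary to learn the version: each get_header_version call above greps one #define out of include/spdk/version.h, takes the second tab-separated field with cut, and strips the quotes with tr, and the assembled string (24.1, then 24.1.1 because patch is nonzero, then 24.1.1rc0) must equal what the bundled Python package reports as spdk.__version__. Condensed to its core, with the suffix handling simplified:

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
    }
    echo "$(get_header_version MAJOR).$(get_header_version MINOR).$(get_header_version PATCH)"
    # the real script appends rc0 when the -pre suffix is present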
************************************ 00:09:19.970 START TEST blockdev_nvme 00:09:19.970 ************************************ 00:09:19.971 15:34:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:19.971 * Looking for test storage... 00:09:19.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:19.971 15:34:41 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:19.971 15:34:41 -- bdev/nbd_common.sh@6 -- # set -e 00:09:19.971 15:34:41 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:19.971 15:34:41 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:19.971 15:34:41 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:19.971 15:34:41 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:19.971 15:34:41 -- bdev/blockdev.sh@18 -- # : 00:09:19.971 15:34:41 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:09:19.971 15:34:41 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:09:19.971 15:34:41 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:09:19.971 15:34:41 -- bdev/blockdev.sh@672 -- # uname -s 00:09:19.971 15:34:41 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:09:19.971 15:34:41 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:09:19.971 15:34:41 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:09:19.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.971 15:34:41 -- bdev/blockdev.sh@681 -- # crypto_device= 00:09:19.971 15:34:41 -- bdev/blockdev.sh@682 -- # dek= 00:09:19.971 15:34:41 -- bdev/blockdev.sh@683 -- # env_ctx= 00:09:19.971 15:34:41 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:09:19.971 15:34:41 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:09:19.971 15:34:41 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:09:19.971 15:34:41 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:09:19.971 15:34:41 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:09:19.971 15:34:41 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=60892 00:09:19.971 15:34:41 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:19.971 15:34:41 -- bdev/blockdev.sh@47 -- # waitforlisten 60892 00:09:19.971 15:34:41 -- common/autotest_common.sh@819 -- # '[' -z 60892 ']' 00:09:19.971 15:34:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.971 15:34:41 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:19.971 15:34:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:19.971 15:34:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.971 15:34:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:19.971 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:09:19.971 [2024-07-24 15:34:41.486958] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
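setup_nvme_conf, which runs next, does not hand-write the bdev list: scripts/gen_nvme.sh scans for the qemu NVMe controllers (here at PCI 0000:00:06.0 through 0000:00:09.0) and emits a bdev subsystem config of bdev_nvme_attach_controller calls, which the test pushes into the live target with load_subsystem_config; that is what materializes Nvme0n1 through Nvme3n1 below. Done by hand it is roughly, assuming the target is already listening:

    # Sketch: regenerate and load the NVMe bdev config into a running target.
    json="$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j "$json"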
00:09:19.971 [2024-07-24 15:34:41.487772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60892 ] 00:09:20.230 [2024-07-24 15:34:41.659440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.490 [2024-07-24 15:34:41.840309] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:20.490 [2024-07-24 15:34:41.840767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.866 15:34:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:21.866 15:34:43 -- common/autotest_common.sh@852 -- # return 0 00:09:21.866 15:34:43 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:09:21.866 15:34:43 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:09:21.866 15:34:43 -- bdev/blockdev.sh@79 -- # local json 00:09:21.866 15:34:43 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:09:21.866 15:34:43 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:21.866 15:34:43 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:09:21.866 15:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:21.866 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:09:21.866 15:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:21.866 15:34:43 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:09:21.866 15:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:21.866 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:09:21.866 15:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:21.866 15:34:43 -- bdev/blockdev.sh@738 -- # cat 00:09:21.866 15:34:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:09:21.866 15:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:21.866 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:09:21.866 15:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:21.866 15:34:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:09:21.866 15:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:21.866 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:09:22.132 15:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:22.132 15:34:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:22.132 15:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:22.132 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:09:22.132 15:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:22.132 15:34:43 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:09:22.132 15:34:43 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:09:22.132 15:34:43 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:09:22.132 15:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:22.132 15:34:43 -- 
common/autotest_common.sh@10 -- # set +x 00:09:22.132 15:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:22.132 15:34:43 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:09:22.132 15:34:43 -- bdev/blockdev.sh@747 -- # jq -r .name 00:09:22.132 15:34:43 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8f0e24be-560c-4acb-a34a-b042d53d8740"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8f0e24be-560c-4acb-a34a-b042d53d8740",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "3e6970c9-cdb4-460c-9fcb-2b8a7b281907"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3e6970c9-cdb4-460c-9fcb-2b8a7b281907",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "8aecba9b-1730-4ed6-9558-7ce5bded23e5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8aecba9b-1730-4ed6-9558-7ce5bded23e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "044bb075-c1a8-48f9-9ff6-de66a0ff407e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "044bb075-c1a8-48f9-9ff6-de66a0ff407e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d8ccfb49-aa59-44e5-8bf9-1f2fec7f8f60"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d8ccfb49-aa59-44e5-8bf9-1f2fec7f8f60",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "bf013c83-741d-4ab5-862d-65e3bc00f1dd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"bf013c83-741d-4ab5-862d-65e3bc00f1dd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:22.132 15:34:43 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:09:22.132 15:34:43 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:09:22.132 15:34:43 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:09:22.132 15:34:43 -- bdev/blockdev.sh@752 -- # killprocess 60892 00:09:22.132 15:34:43 -- common/autotest_common.sh@926 -- # '[' -z 60892 ']' 00:09:22.132 15:34:43 -- common/autotest_common.sh@930 -- # kill -0 60892 00:09:22.132 15:34:43 -- common/autotest_common.sh@931 -- # uname 00:09:22.132 15:34:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:22.132 15:34:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60892 00:09:22.132 killing process with pid 60892 00:09:22.133 15:34:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:22.133 15:34:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:22.133 15:34:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60892' 00:09:22.133 15:34:43 -- common/autotest_common.sh@945 -- # kill 60892 00:09:22.133 15:34:43 -- common/autotest_common.sh@950 -- # wait 60892 00:09:24.668 15:34:45 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:24.668 15:34:45 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:24.668 15:34:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:24.668 15:34:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:24.668 15:34:45 -- common/autotest_common.sh@10 -- # set +x 00:09:24.668 ************************************ 00:09:24.668 START TEST bdev_hello_world 00:09:24.668 ************************************ 00:09:24.668 15:34:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:24.668 [2024-07-24 15:34:45.738963] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:09:24.668 [2024-07-24 15:34:45.739145] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60995 ] 00:09:24.668 [2024-07-24 15:34:45.907070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.668 [2024-07-24 15:34:46.083593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.232 [2024-07-24 15:34:46.677980] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:25.233 [2024-07-24 15:34:46.678040] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:25.233 [2024-07-24 15:34:46.678096] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:25.233 [2024-07-24 15:34:46.681036] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:25.233 [2024-07-24 15:34:46.681663] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:25.233 [2024-07-24 15:34:46.681704] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:25.233 [2024-07-24 15:34:46.681878] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:25.233 00:09:25.233 [2024-07-24 15:34:46.681910] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:26.165 ************************************ 00:09:26.165 END TEST bdev_hello_world 00:09:26.165 ************************************ 00:09:26.165 00:09:26.165 real 0m1.993s 00:09:26.165 user 0m1.674s 00:09:26.166 sys 0m0.210s 00:09:26.166 15:34:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.166 15:34:47 -- common/autotest_common.sh@10 -- # set +x 00:09:26.166 15:34:47 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:09:26.166 15:34:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:26.166 15:34:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:26.166 15:34:47 -- common/autotest_common.sh@10 -- # set +x 00:09:26.166 ************************************ 00:09:26.166 START TEST bdev_bounds 00:09:26.166 ************************************ 00:09:26.166 15:34:47 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:09:26.166 Process bdevio pid: 61037 00:09:26.166 15:34:47 -- bdev/blockdev.sh@288 -- # bdevio_pid=61037 00:09:26.166 15:34:47 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:26.166 15:34:47 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 61037' 00:09:26.166 15:34:47 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:26.166 15:34:47 -- bdev/blockdev.sh@291 -- # waitforlisten 61037 00:09:26.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.166 15:34:47 -- common/autotest_common.sh@819 -- # '[' -z 61037 ']' 00:09:26.166 15:34:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.166 15:34:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:26.166 15:34:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
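bdev_bounds runs bdevio in two halves: the app is started with -w, so after registering the bdevs it parks and listens on the RPC socket rather than running tests immediately, and once waitforlisten sees the socket, tests.py fires perform_tests over JSON-RPC, producing the per-bdev CUnit suites that follow (-s 0 appears to request no extra DPDK memory reservation; that reading is an interpretation, not verified). The two halves, roughly:

    # Sketch: start bdevio in wait mode, then trigger the suites over RPC.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # ...wait for /var/tmp/spdk.sock to appear, then:
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests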
00:09:26.166 15:34:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:26.166 15:34:47 -- common/autotest_common.sh@10 -- # set +x 00:09:26.424 [2024-07-24 15:34:47.788511] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:26.424 [2024-07-24 15:34:47.788910] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61037 ] 00:09:26.424 [2024-07-24 15:34:47.960799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:26.682 [2024-07-24 15:34:48.142842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.682 [2024-07-24 15:34:48.142941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.682 [2024-07-24 15:34:48.142947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.056 15:34:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:28.056 15:34:49 -- common/autotest_common.sh@852 -- # return 0 00:09:28.056 15:34:49 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:28.056 I/O targets: 00:09:28.056 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:28.056 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:28.056 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:28.056 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:28.056 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:28.056 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:28.056 00:09:28.056 00:09:28.056 CUnit - A unit testing framework for C - Version 2.1-3 00:09:28.056 http://cunit.sourceforge.net/ 00:09:28.056 00:09:28.056 00:09:28.056 Suite: bdevio tests on: Nvme3n1 00:09:28.056 Test: blockdev write read block ...passed 00:09:28.056 Test: blockdev write zeroes read block ...passed 00:09:28.056 Test: blockdev write zeroes read no split ...passed 00:09:28.056 Test: blockdev write zeroes read split ...passed 00:09:28.056 Test: blockdev write zeroes read split partial ...passed 00:09:28.056 Test: blockdev reset ...[2024-07-24 15:34:49.577397] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:28.056 [2024-07-24 15:34:49.581181] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
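bdevio runs in two steps, both traced above: the app starts with -w so it idles until an RPC trigger arrives, and tests.py perform_tests then fires the whole CUnit run over the default socket. A sketch of the same sequence, assuming the bdev.json config from this run:

  # start bdevio waiting for the trigger, then kick off all suites
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  # once the RPC socket is listening:
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests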
00:09:28.056 passed 00:09:28.056 Test: blockdev write read 8 blocks ...passed 00:09:28.056 Test: blockdev write read size > 128k ...passed 00:09:28.056 Test: blockdev write read invalid size ...passed 00:09:28.056 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:28.056 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:28.056 Test: blockdev write read max offset ...passed 00:09:28.056 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:28.056 Test: blockdev writev readv 8 blocks ...passed 00:09:28.056 Test: blockdev writev readv 30 x 1block ...passed 00:09:28.056 Test: blockdev writev readv block ...passed 00:09:28.056 Test: blockdev writev readv size > 128k ...passed 00:09:28.056 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:28.056 Test: blockdev comparev and writev ...[2024-07-24 15:34:49.589792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27320e000 len:0x1000 00:09:28.056 [2024-07-24 15:34:49.589858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:28.056 passed 00:09:28.056 Test: blockdev nvme passthru rw ...passed 00:09:28.056 Test: blockdev nvme passthru vendor specific ...passed 00:09:28.056 Test: blockdev nvme admin passthru ...[2024-07-24 15:34:49.590687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:28.056 [2024-07-24 15:34:49.590736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:28.056 passed 00:09:28.056 Test: blockdev copy ...passed 00:09:28.056 Suite: bdevio tests on: Nvme2n3 00:09:28.056 Test: blockdev write read block ...passed 00:09:28.056 Test: blockdev write zeroes read block ...passed 00:09:28.056 Test: blockdev write zeroes read no split ...passed 00:09:28.056 Test: blockdev write zeroes read split ...passed 00:09:28.056 Test: blockdev write zeroes read split partial ...passed 00:09:28.056 Test: blockdev reset ...[2024-07-24 15:34:49.652267] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:28.315 [2024-07-24 15:34:49.656274] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:28.315 passed 00:09:28.315 Test: blockdev write read 8 blocks ...passed 00:09:28.315 Test: blockdev write read size > 128k ...passed 00:09:28.315 Test: blockdev write read invalid size ...passed 00:09:28.315 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:28.315 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:28.315 Test: blockdev write read max offset ...passed 00:09:28.315 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:28.315 Test: blockdev writev readv 8 blocks ...passed 00:09:28.315 Test: blockdev writev readv 30 x 1block ...passed 00:09:28.315 Test: blockdev writev readv block ...passed 00:09:28.315 Test: blockdev writev readv size > 128k ...passed 00:09:28.315 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:28.315 Test: blockdev comparev and writev ...[2024-07-24 15:34:49.665205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27320a000 len:0x1000 00:09:28.315 [2024-07-24 15:34:49.665261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:28.315 passed 00:09:28.315 Test: blockdev nvme passthru rw ...passed 00:09:28.315 Test: blockdev nvme passthru vendor specific ...passed 00:09:28.315 Test: blockdev nvme admin passthru ...[2024-07-24 15:34:49.665982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:28.315 [2024-07-24 15:34:49.666027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:28.315 passed 00:09:28.315 Test: blockdev copy ...passed 00:09:28.315 Suite: bdevio tests on: Nvme2n2 00:09:28.315 Test: blockdev write read block ...passed 00:09:28.315 Test: blockdev write zeroes read block ...passed 00:09:28.315 Test: blockdev write zeroes read no split ...passed 00:09:28.315 Test: blockdev write zeroes read split ...passed 00:09:28.315 Test: blockdev write zeroes read split partial ...passed 00:09:28.315 Test: blockdev reset ...[2024-07-24 15:34:49.729410] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:28.315 [2024-07-24 15:34:49.733269] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:28.315 passed 00:09:28.315 Test: blockdev write read 8 blocks ...passed 00:09:28.315 Test: blockdev write read size > 128k ...passed 00:09:28.315 Test: blockdev write read invalid size ...passed 00:09:28.315 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:28.315 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:28.315 Test: blockdev write read max offset ...passed 00:09:28.315 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:28.315 Test: blockdev writev readv 8 blocks ...passed 00:09:28.315 Test: blockdev writev readv 30 x 1block ...passed 00:09:28.315 Test: blockdev writev readv block ...passed 00:09:28.315 Test: blockdev writev readv size > 128k ...passed 00:09:28.315 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:28.315 Test: blockdev comparev and writev ...[2024-07-24 15:34:49.742341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x267006000 len:0x1000 00:09:28.315 [2024-07-24 15:34:49.742397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:28.315 passed 00:09:28.315 Test: blockdev nvme passthru rw ...passed 00:09:28.315 Test: blockdev nvme passthru vendor specific ...passed 00:09:28.315 Test: blockdev nvme admin passthru ...[2024-07-24 15:34:49.743254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:28.315 [2024-07-24 15:34:49.743300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:28.315 passed 00:09:28.315 Test: blockdev copy ...passed 00:09:28.315 Suite: bdevio tests on: Nvme2n1 00:09:28.315 Test: blockdev write read block ...passed 00:09:28.315 Test: blockdev write zeroes read block ...passed 00:09:28.315 Test: blockdev write zeroes read no split ...passed 00:09:28.315 Test: blockdev write zeroes read split ...passed 00:09:28.315 Test: blockdev write zeroes read split partial ...passed 00:09:28.315 Test: blockdev reset ...[2024-07-24 15:34:49.820060] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:28.315 [2024-07-24 15:34:49.824125] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:28.315 passed 00:09:28.315 Test: blockdev write read 8 blocks ...passed 00:09:28.315 Test: blockdev write read size > 128k ...passed 00:09:28.315 Test: blockdev write read invalid size ...passed 00:09:28.315 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:28.315 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:28.315 Test: blockdev write read max offset ...passed 00:09:28.315 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:28.315 Test: blockdev writev readv 8 blocks ...passed 00:09:28.315 Test: blockdev writev readv 30 x 1block ...passed 00:09:28.315 Test: blockdev writev readv block ...passed 00:09:28.315 Test: blockdev writev readv size > 128k ...passed 00:09:28.315 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:28.315 Test: blockdev comparev and writev ...[2024-07-24 15:34:49.834386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x267001000 len:0x1000 00:09:28.315 [2024-07-24 15:34:49.834611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:28.315 passed 00:09:28.315 Test: blockdev nvme passthru rw ...passed 00:09:28.315 Test: blockdev nvme passthru vendor specific ...[2024-07-24 15:34:49.835944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:28.315 [2024-07-24 15:34:49.836192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:28.315 passed 00:09:28.315 Test: blockdev nvme admin passthru ...passed 00:09:28.315 Test: blockdev copy ...passed 00:09:28.315 Suite: bdevio tests on: Nvme1n1 00:09:28.315 Test: blockdev write read block ...passed 00:09:28.315 Test: blockdev write zeroes read block ...passed 00:09:28.315 Test: blockdev write zeroes read no split ...passed 00:09:28.315 Test: blockdev write zeroes read split ...passed 00:09:28.315 Test: blockdev write zeroes read split partial ...passed 00:09:28.315 Test: blockdev reset ...[2024-07-24 15:34:49.905188] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:28.315 [2024-07-24 15:34:49.908892] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:28.315 passed 00:09:28.315 Test: blockdev write read 8 blocks ...passed 00:09:28.315 Test: blockdev write read size > 128k ...passed 00:09:28.315 Test: blockdev write read invalid size ...passed 00:09:28.574 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:28.574 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:28.574 Test: blockdev write read max offset ...passed 00:09:28.574 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:28.574 Test: blockdev writev readv 8 blocks ...passed 00:09:28.574 Test: blockdev writev readv 30 x 1block ...passed 00:09:28.574 Test: blockdev writev readv block ...passed 00:09:28.574 Test: blockdev writev readv size > 128k ...passed 00:09:28.574 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:28.574 Test: blockdev comparev and writev ...[2024-07-24 15:34:49.917911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x276c06000 len:0x1000 00:09:28.574 [2024-07-24 15:34:49.917967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:28.574 passed 00:09:28.574 Test: blockdev nvme passthru rw ...passed 00:09:28.574 Test: blockdev nvme passthru vendor specific ...[2024-07-24 15:34:49.918812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:28.574 [2024-07-24 15:34:49.918856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:28.574 passed 00:09:28.574 Test: blockdev nvme admin passthru ...passed 00:09:28.574 Test: blockdev copy ...passed 00:09:28.574 Suite: bdevio tests on: Nvme0n1 00:09:28.574 Test: blockdev write read block ...passed 00:09:28.574 Test: blockdev write zeroes read block ...passed 00:09:28.574 Test: blockdev write zeroes read no split ...passed 00:09:28.574 Test: blockdev write zeroes read split ...passed 00:09:28.574 Test: blockdev write zeroes read split partial ...passed 00:09:28.574 Test: blockdev reset ...[2024-07-24 15:34:49.980705] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:28.574 [2024-07-24 15:34:49.984252] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:28.574 passed 00:09:28.574 Test: blockdev write read 8 blocks ...passed 00:09:28.574 Test: blockdev write read size > 128k ...passed 00:09:28.574 Test: blockdev write read invalid size ...passed 00:09:28.574 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:28.574 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:28.574 Test: blockdev write read max offset ...passed 00:09:28.574 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:28.574 Test: blockdev writev readv 8 blocks ...passed 00:09:28.574 Test: blockdev writev readv 30 x 1block ...passed 00:09:28.574 Test: blockdev writev readv block ...passed 00:09:28.574 Test: blockdev writev readv size > 128k ...passed 00:09:28.574 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:28.574 Test: blockdev comparev and writev ...passed 00:09:28.574 Test: blockdev nvme passthru rw ...[2024-07-24 15:34:49.991933] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:28.574 separate metadata which is not supported yet. 00:09:28.574 passed 00:09:28.574 Test: blockdev nvme passthru vendor specific ...passed 00:09:28.574 Test: blockdev nvme admin passthru ...[2024-07-24 15:34:49.992455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:28.574 [2024-07-24 15:34:49.992510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:28.574 passed 00:09:28.574 Test: blockdev copy ...passed 00:09:28.574 00:09:28.574 Run Summary: Type Total Ran Passed Failed Inactive 00:09:28.574 suites 6 6 n/a 0 0 00:09:28.574 tests 138 138 138 0 0 00:09:28.574 asserts 893 893 893 0 n/a 00:09:28.574 00:09:28.574 Elapsed time = 1.308 seconds 00:09:28.574 0 00:09:28.574 15:34:50 -- bdev/blockdev.sh@293 -- # killprocess 61037 00:09:28.574 15:34:50 -- common/autotest_common.sh@926 -- # '[' -z 61037 ']' 00:09:28.574 15:34:50 -- common/autotest_common.sh@930 -- # kill -0 61037 00:09:28.574 15:34:50 -- common/autotest_common.sh@931 -- # uname 00:09:28.574 15:34:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:28.574 15:34:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61037 00:09:28.574 15:34:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:28.574 15:34:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:28.574 15:34:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61037' 00:09:28.574 killing process with pid 61037 00:09:28.574 15:34:50 -- common/autotest_common.sh@945 -- # kill 61037 00:09:28.574 15:34:50 -- common/autotest_common.sh@950 -- # wait 61037 00:09:29.508 15:34:50 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:09:29.508 00:09:29.508 real 0m3.262s 00:09:29.508 user 0m8.593s 00:09:29.508 sys 0m0.368s 00:09:29.508 15:34:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.508 15:34:50 -- common/autotest_common.sh@10 -- # set +x 00:09:29.508 ************************************ 00:09:29.508 END TEST bdev_bounds 00:09:29.508 ************************************ 00:09:29.508 15:34:51 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:29.508 15:34:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:29.508 15:34:51 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:09:29.508 15:34:51 -- common/autotest_common.sh@10 -- # set +x 00:09:29.508 ************************************ 00:09:29.508 START TEST bdev_nbd 00:09:29.508 ************************************ 00:09:29.508 15:34:51 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:29.508 15:34:51 -- bdev/blockdev.sh@298 -- # uname -s 00:09:29.508 15:34:51 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:09:29.508 15:34:51 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.508 15:34:51 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:29.508 15:34:51 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:29.508 15:34:51 -- bdev/blockdev.sh@302 -- # local bdev_all 00:09:29.508 15:34:51 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:09:29.508 15:34:51 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:09:29.508 15:34:51 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:29.508 15:34:51 -- bdev/blockdev.sh@309 -- # local nbd_all 00:09:29.508 15:34:51 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:09:29.508 15:34:51 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:29.508 15:34:51 -- bdev/blockdev.sh@312 -- # local nbd_list 00:09:29.508 15:34:51 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:29.508 15:34:51 -- bdev/blockdev.sh@313 -- # local bdev_list 00:09:29.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:29.508 15:34:51 -- bdev/blockdev.sh@316 -- # nbd_pid=61104 00:09:29.508 15:34:51 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:29.508 15:34:51 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:29.508 15:34:51 -- bdev/blockdev.sh@318 -- # waitforlisten 61104 /var/tmp/spdk-nbd.sock 00:09:29.508 15:34:51 -- common/autotest_common.sh@819 -- # '[' -z 61104 ']' 00:09:29.508 15:34:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:29.508 15:34:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:29.508 15:34:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:29.508 15:34:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:29.508 15:34:51 -- common/autotest_common.sh@10 -- # set +x 00:09:29.766 [2024-07-24 15:34:51.113661] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
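bdev_nbd stands up a bare bdev_svc app on a dedicated RPC socket, then exports each bdev as a kernel block device with nbd_start_disk, exactly as traced around here. The start half of that cycle, with the paths this run uses:

  # minimal app hosting the bdevs, RPC on its own socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  # map one bdev to a kernel NBD node (repeated per bdev below)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_start_disk Nvme0n1 /dev/nbd0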
00:09:29.766 [2024-07-24 15:34:51.113832] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.766 [2024-07-24 15:34:51.289743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.024 [2024-07-24 15:34:51.516626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.403 15:34:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:31.403 15:34:52 -- common/autotest_common.sh@852 -- # return 0 00:09:31.403 15:34:52 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:31.403 15:34:52 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.403 15:34:52 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:31.403 15:34:52 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:31.403 15:34:52 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:31.403 15:34:52 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.403 15:34:52 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:31.403 15:34:52 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:31.403 15:34:52 -- bdev/nbd_common.sh@24 -- # local i 00:09:31.403 15:34:52 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:31.403 15:34:52 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:31.403 15:34:52 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:31.403 15:34:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:31.403 15:34:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:31.403 15:34:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:31.403 15:34:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:31.403 15:34:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:31.403 15:34:52 -- common/autotest_common.sh@857 -- # local i 00:09:31.403 15:34:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:31.403 15:34:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:31.403 15:34:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:31.403 15:34:52 -- common/autotest_common.sh@861 -- # break 00:09:31.403 15:34:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:31.403 15:34:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:31.403 15:34:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.403 1+0 records in 00:09:31.403 1+0 records out 00:09:31.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401261 s, 10.2 MB/s 00:09:31.403 15:34:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.661 15:34:53 -- common/autotest_common.sh@874 -- # size=4096 00:09:31.661 15:34:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.661 15:34:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:31.661 15:34:53 -- common/autotest_common.sh@877 -- # return 0 00:09:31.661 15:34:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:31.661 15:34:53 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:31.661 15:34:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:31.661 15:34:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:31.661 15:34:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:31.661 15:34:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:31.661 15:34:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:31.661 15:34:53 -- common/autotest_common.sh@857 -- # local i 00:09:31.661 15:34:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:31.661 15:34:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:31.661 15:34:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:31.661 15:34:53 -- common/autotest_common.sh@861 -- # break 00:09:31.661 15:34:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:31.661 15:34:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:31.661 15:34:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.661 1+0 records in 00:09:31.661 1+0 records out 00:09:31.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461955 s, 8.9 MB/s 00:09:31.922 15:34:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.922 15:34:53 -- common/autotest_common.sh@874 -- # size=4096 00:09:31.922 15:34:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.922 15:34:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:31.922 15:34:53 -- common/autotest_common.sh@877 -- # return 0 00:09:31.922 15:34:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:31.922 15:34:53 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:31.922 15:34:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:31.922 15:34:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:31.922 15:34:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:31.922 15:34:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:31.922 15:34:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:09:31.922 15:34:53 -- common/autotest_common.sh@857 -- # local i 00:09:31.922 15:34:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:31.922 15:34:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:31.922 15:34:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:09:31.922 15:34:53 -- common/autotest_common.sh@861 -- # break 00:09:31.922 15:34:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:31.922 15:34:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:31.922 15:34:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.922 1+0 records in 00:09:31.922 1+0 records out 00:09:31.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569655 s, 7.2 MB/s 00:09:31.922 15:34:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.922 15:34:53 -- common/autotest_common.sh@874 -- # size=4096 00:09:31.922 15:34:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.922 15:34:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:31.922 15:34:53 -- common/autotest_common.sh@877 -- # return 0 
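The waitfornbd trace repeating above polls /proc/partitions until the node appears, then proves it answers I/O with one direct 4 KiB read. A hedged reconstruction of that helper from the trace (the retry sleep and the /tmp path are assumptions; only the success path is visible here):

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # the node shows up in the partition table once the kernel attached it
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # one O_DIRECT block read confirms the export actually serves data
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      local size
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }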
00:09:31.922 15:34:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:31.922 15:34:53 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:31.922 15:34:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:32.485 15:34:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:32.485 15:34:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:32.485 15:34:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:32.485 15:34:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:09:32.485 15:34:53 -- common/autotest_common.sh@857 -- # local i 00:09:32.485 15:34:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:32.485 15:34:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:32.485 15:34:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:09:32.486 15:34:53 -- common/autotest_common.sh@861 -- # break 00:09:32.486 15:34:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:32.486 15:34:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:32.486 15:34:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:32.486 1+0 records in 00:09:32.486 1+0 records out 00:09:32.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681422 s, 6.0 MB/s 00:09:32.486 15:34:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.486 15:34:53 -- common/autotest_common.sh@874 -- # size=4096 00:09:32.486 15:34:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.486 15:34:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:32.486 15:34:53 -- common/autotest_common.sh@877 -- # return 0 00:09:32.486 15:34:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:32.486 15:34:53 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:32.486 15:34:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:32.486 15:34:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:32.744 15:34:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:32.744 15:34:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:32.744 15:34:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:09:32.744 15:34:54 -- common/autotest_common.sh@857 -- # local i 00:09:32.744 15:34:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:32.744 15:34:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:32.744 15:34:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:09:32.744 15:34:54 -- common/autotest_common.sh@861 -- # break 00:09:32.744 15:34:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:32.744 15:34:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:32.744 15:34:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:32.744 1+0 records in 00:09:32.744 1+0 records out 00:09:32.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608382 s, 6.7 MB/s 00:09:32.744 15:34:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.744 15:34:54 -- common/autotest_common.sh@874 -- # size=4096 00:09:32.744 15:34:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.744 15:34:54 -- common/autotest_common.sh@876 -- # '[' 
4096 '!=' 0 ']' 00:09:32.744 15:34:54 -- common/autotest_common.sh@877 -- # return 0 00:09:32.744 15:34:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:32.744 15:34:54 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:32.744 15:34:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:33.002 15:34:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:33.002 15:34:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:33.002 15:34:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:33.002 15:34:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:09:33.002 15:34:54 -- common/autotest_common.sh@857 -- # local i 00:09:33.002 15:34:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:33.002 15:34:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:33.002 15:34:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:09:33.002 15:34:54 -- common/autotest_common.sh@861 -- # break 00:09:33.002 15:34:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:33.002 15:34:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:33.002 15:34:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:33.002 1+0 records in 00:09:33.002 1+0 records out 00:09:33.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000839929 s, 4.9 MB/s 00:09:33.002 15:34:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.002 15:34:54 -- common/autotest_common.sh@874 -- # size=4096 00:09:33.002 15:34:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.002 15:34:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:33.002 15:34:54 -- common/autotest_common.sh@877 -- # return 0 00:09:33.002 15:34:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:33.002 15:34:54 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:33.002 15:34:54 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:33.260 15:34:54 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:33.260 { 00:09:33.260 "nbd_device": "/dev/nbd0", 00:09:33.260 "bdev_name": "Nvme0n1" 00:09:33.260 }, 00:09:33.260 { 00:09:33.260 "nbd_device": "/dev/nbd1", 00:09:33.260 "bdev_name": "Nvme1n1" 00:09:33.260 }, 00:09:33.260 { 00:09:33.260 "nbd_device": "/dev/nbd2", 00:09:33.260 "bdev_name": "Nvme2n1" 00:09:33.260 }, 00:09:33.260 { 00:09:33.260 "nbd_device": "/dev/nbd3", 00:09:33.260 "bdev_name": "Nvme2n2" 00:09:33.260 }, 00:09:33.260 { 00:09:33.260 "nbd_device": "/dev/nbd4", 00:09:33.260 "bdev_name": "Nvme2n3" 00:09:33.260 }, 00:09:33.260 { 00:09:33.260 "nbd_device": "/dev/nbd5", 00:09:33.260 "bdev_name": "Nvme3n1" 00:09:33.260 } 00:09:33.260 ]' 00:09:33.260 15:34:54 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:33.260 15:34:54 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:33.260 15:34:54 -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:33.260 { 00:09:33.260 "nbd_device": "/dev/nbd0", 00:09:33.260 "bdev_name": "Nvme0n1" 00:09:33.260 }, 00:09:33.260 { 00:09:33.260 "nbd_device": "/dev/nbd1", 00:09:33.260 "bdev_name": "Nvme1n1" 00:09:33.260 }, 00:09:33.260 { 00:09:33.260 "nbd_device": "/dev/nbd2", 00:09:33.260 "bdev_name": "Nvme2n1" 00:09:33.260 }, 00:09:33.260 { 00:09:33.260 "nbd_device": "/dev/nbd3", 00:09:33.260 
"bdev_name": "Nvme2n2" 00:09:33.260 }, 00:09:33.260 { 00:09:33.260 "nbd_device": "/dev/nbd4", 00:09:33.260 "bdev_name": "Nvme2n3" 00:09:33.260 }, 00:09:33.260 { 00:09:33.260 "nbd_device": "/dev/nbd5", 00:09:33.260 "bdev_name": "Nvme3n1" 00:09:33.260 } 00:09:33.260 ]' 00:09:33.260 15:34:54 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:33.260 15:34:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.260 15:34:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:33.260 15:34:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:33.260 15:34:54 -- bdev/nbd_common.sh@51 -- # local i 00:09:33.260 15:34:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.260 15:34:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:33.518 15:34:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:33.518 15:34:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:33.518 15:34:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:33.518 15:34:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.518 15:34:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.518 15:34:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:33.518 15:34:54 -- bdev/nbd_common.sh@41 -- # break 00:09:33.518 15:34:54 -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.518 15:34:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.518 15:34:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:33.776 15:34:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:33.776 15:34:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:33.776 15:34:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:33.776 15:34:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.776 15:34:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.776 15:34:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:33.776 15:34:55 -- bdev/nbd_common.sh@41 -- # break 00:09:33.776 15:34:55 -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.776 15:34:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.776 15:34:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:34.034 15:34:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:34.034 15:34:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:34.034 15:34:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:34.034 15:34:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.034 15:34:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.034 15:34:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:34.034 15:34:55 -- bdev/nbd_common.sh@41 -- # break 00:09:34.034 15:34:55 -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.034 15:34:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.034 15:34:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:34.292 
15:34:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@41 -- # break 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@41 -- # break 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.292 15:34:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:34.550 15:34:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:34.550 15:34:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:34.550 15:34:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:34.550 15:34:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.550 15:34:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.550 15:34:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:34.550 15:34:56 -- bdev/nbd_common.sh@41 -- # break 00:09:34.550 15:34:56 -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.550 15:34:56 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:34.550 15:34:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.550 15:34:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@65 -- # true 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@65 -- # count=0 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@122 -- # count=0 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@127 -- # return 0 00:09:34.808 15:34:56 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@12 -- # local i 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:34.808 15:34:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:35.066 /dev/nbd0 00:09:35.066 15:34:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:35.066 15:34:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:35.066 15:34:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:35.066 15:34:56 -- common/autotest_common.sh@857 -- # local i 00:09:35.066 15:34:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:35.066 15:34:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:35.066 15:34:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:35.066 15:34:56 -- common/autotest_common.sh@861 -- # break 00:09:35.066 15:34:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:35.066 15:34:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:35.066 15:34:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.066 1+0 records in 00:09:35.066 1+0 records out 00:09:35.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664844 s, 6.2 MB/s 00:09:35.066 15:34:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.066 15:34:56 -- common/autotest_common.sh@874 -- # size=4096 00:09:35.066 15:34:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.066 15:34:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:35.066 15:34:56 -- common/autotest_common.sh@877 -- # return 0 00:09:35.066 15:34:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.066 15:34:56 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:35.066 15:34:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:35.324 /dev/nbd1 00:09:35.324 15:34:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:35.324 15:34:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:35.324 15:34:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:35.324 15:34:56 -- common/autotest_common.sh@857 -- # local i 00:09:35.324 15:34:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:35.324 15:34:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:35.324 15:34:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:35.324 15:34:56 -- common/autotest_common.sh@861 -- # break 
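The nbd_get_count check traced a little above derives the number of live exports by piping nbd_get_disks through jq and grep -c; grep exits nonzero when it counts zero matches, which is why the trace runs true afterwards. A standalone sketch of the same count:

  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
      | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  echo "$count"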
00:09:35.324 15:34:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:35.324 15:34:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:35.324 15:34:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.324 1+0 records in 00:09:35.324 1+0 records out 00:09:35.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514449 s, 8.0 MB/s 00:09:35.324 15:34:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.324 15:34:56 -- common/autotest_common.sh@874 -- # size=4096 00:09:35.324 15:34:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.324 15:34:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:35.324 15:34:56 -- common/autotest_common.sh@877 -- # return 0 00:09:35.324 15:34:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.324 15:34:56 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:35.324 15:34:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:35.582 /dev/nbd10 00:09:35.582 15:34:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:35.582 15:34:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:35.582 15:34:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:09:35.582 15:34:57 -- common/autotest_common.sh@857 -- # local i 00:09:35.582 15:34:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:35.582 15:34:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:35.582 15:34:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:09:35.582 15:34:57 -- common/autotest_common.sh@861 -- # break 00:09:35.582 15:34:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:35.582 15:34:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:35.582 15:34:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.582 1+0 records in 00:09:35.582 1+0 records out 00:09:35.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622348 s, 6.6 MB/s 00:09:35.582 15:34:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.582 15:34:57 -- common/autotest_common.sh@874 -- # size=4096 00:09:35.582 15:34:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.582 15:34:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:35.582 15:34:57 -- common/autotest_common.sh@877 -- # return 0 00:09:35.582 15:34:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.582 15:34:57 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:35.582 15:34:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:35.839 /dev/nbd11 00:09:35.840 15:34:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:36.098 15:34:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:36.098 15:34:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:09:36.098 15:34:57 -- common/autotest_common.sh@857 -- # local i 00:09:36.098 15:34:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:36.098 15:34:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:36.098 15:34:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:09:36.098 15:34:57 -- 
common/autotest_common.sh@861 -- # break 00:09:36.098 15:34:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:36.098 15:34:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:36.098 15:34:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:36.098 1+0 records in 00:09:36.098 1+0 records out 00:09:36.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533261 s, 7.7 MB/s 00:09:36.098 15:34:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.098 15:34:57 -- common/autotest_common.sh@874 -- # size=4096 00:09:36.098 15:34:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.098 15:34:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:36.098 15:34:57 -- common/autotest_common.sh@877 -- # return 0 00:09:36.098 15:34:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:36.098 15:34:57 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:36.098 15:34:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:36.098 /dev/nbd12 00:09:36.098 15:34:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:36.098 15:34:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:36.098 15:34:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:09:36.098 15:34:57 -- common/autotest_common.sh@857 -- # local i 00:09:36.098 15:34:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:36.098 15:34:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:36.098 15:34:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:09:36.098 15:34:57 -- common/autotest_common.sh@861 -- # break 00:09:36.098 15:34:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:36.098 15:34:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:36.098 15:34:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:36.098 1+0 records in 00:09:36.098 1+0 records out 00:09:36.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000917817 s, 4.5 MB/s 00:09:36.098 15:34:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.359 15:34:57 -- common/autotest_common.sh@874 -- # size=4096 00:09:36.359 15:34:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.359 15:34:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:36.359 15:34:57 -- common/autotest_common.sh@877 -- # return 0 00:09:36.359 15:34:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:36.359 15:34:57 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:36.359 15:34:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:36.359 /dev/nbd13 00:09:36.359 15:34:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:36.359 15:34:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:36.359 15:34:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:09:36.359 15:34:57 -- common/autotest_common.sh@857 -- # local i 00:09:36.359 15:34:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:36.359 15:34:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:36.359 15:34:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 
00:09:36.359 15:34:57 -- common/autotest_common.sh@861 -- # break 00:09:36.359 15:34:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:36.359 15:34:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:36.359 15:34:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:36.359 1+0 records in 00:09:36.359 1+0 records out 00:09:36.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000819192 s, 5.0 MB/s 00:09:36.359 15:34:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.359 15:34:57 -- common/autotest_common.sh@874 -- # size=4096 00:09:36.359 15:34:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.359 15:34:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:36.359 15:34:57 -- common/autotest_common.sh@877 -- # return 0 00:09:36.359 15:34:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:36.359 15:34:57 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:36.359 15:34:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:36.359 15:34:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:36.359 15:34:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:36.633 15:34:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:36.633 { 00:09:36.633 "nbd_device": "/dev/nbd0", 00:09:36.633 "bdev_name": "Nvme0n1" 00:09:36.633 }, 00:09:36.633 { 00:09:36.633 "nbd_device": "/dev/nbd1", 00:09:36.633 "bdev_name": "Nvme1n1" 00:09:36.633 }, 00:09:36.633 { 00:09:36.633 "nbd_device": "/dev/nbd10", 00:09:36.633 "bdev_name": "Nvme2n1" 00:09:36.633 }, 00:09:36.633 { 00:09:36.633 "nbd_device": "/dev/nbd11", 00:09:36.633 "bdev_name": "Nvme2n2" 00:09:36.633 }, 00:09:36.633 { 00:09:36.633 "nbd_device": "/dev/nbd12", 00:09:36.633 "bdev_name": "Nvme2n3" 00:09:36.633 }, 00:09:36.633 { 00:09:36.633 "nbd_device": "/dev/nbd13", 00:09:36.633 "bdev_name": "Nvme3n1" 00:09:36.633 } 00:09:36.633 ]' 00:09:36.633 15:34:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:36.633 { 00:09:36.633 "nbd_device": "/dev/nbd0", 00:09:36.633 "bdev_name": "Nvme0n1" 00:09:36.633 }, 00:09:36.633 { 00:09:36.633 "nbd_device": "/dev/nbd1", 00:09:36.633 "bdev_name": "Nvme1n1" 00:09:36.633 }, 00:09:36.633 { 00:09:36.633 "nbd_device": "/dev/nbd10", 00:09:36.633 "bdev_name": "Nvme2n1" 00:09:36.633 }, 00:09:36.633 { 00:09:36.633 "nbd_device": "/dev/nbd11", 00:09:36.633 "bdev_name": "Nvme2n2" 00:09:36.633 }, 00:09:36.633 { 00:09:36.633 "nbd_device": "/dev/nbd12", 00:09:36.633 "bdev_name": "Nvme2n3" 00:09:36.633 }, 00:09:36.633 { 00:09:36.633 "nbd_device": "/dev/nbd13", 00:09:36.633 "bdev_name": "Nvme3n1" 00:09:36.633 } 00:09:36.633 ]' 00:09:36.633 15:34:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:36.892 /dev/nbd1 00:09:36.892 /dev/nbd10 00:09:36.892 /dev/nbd11 00:09:36.892 /dev/nbd12 00:09:36.892 /dev/nbd13' 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:36.892 /dev/nbd1 00:09:36.892 /dev/nbd10 00:09:36.892 /dev/nbd11 00:09:36.892 /dev/nbd12 00:09:36.892 /dev/nbd13' 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@65 -- # count=6 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@66 -- # echo 6 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@95 -- # 
count=6 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:36.892 256+0 records in 00:09:36.892 256+0 records out 00:09:36.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00740058 s, 142 MB/s 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:36.892 256+0 records in 00:09:36.892 256+0 records out 00:09:36.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136717 s, 7.7 MB/s 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:36.892 15:34:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:37.150 256+0 records in 00:09:37.150 256+0 records out 00:09:37.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152398 s, 6.9 MB/s 00:09:37.150 15:34:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.150 15:34:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:37.150 256+0 records in 00:09:37.150 256+0 records out 00:09:37.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15917 s, 6.6 MB/s 00:09:37.150 15:34:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.150 15:34:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:37.408 256+0 records in 00:09:37.408 256+0 records out 00:09:37.408 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153613 s, 6.8 MB/s 00:09:37.408 15:34:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.408 15:34:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:37.666 256+0 records in 00:09:37.666 256+0 records out 00:09:37.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148293 s, 7.1 MB/s 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:37.666 256+0 records in 00:09:37.666 256+0 records out 00:09:37.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154796 s, 6.8 MB/s 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:37.666 15:34:59 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@51 -- # local i 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:37.666 15:34:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:37.924 15:34:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:37.924 15:34:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:37.924 15:34:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:37.924 15:34:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:37.924 15:34:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:37.924 15:34:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:37.924 15:34:59 -- bdev/nbd_common.sh@41 -- # break 00:09:38.182 15:34:59 -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.182 15:34:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.182 15:34:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:38.440 15:34:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:38.440 15:34:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:38.440 15:34:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:38.440 15:34:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.440 15:34:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.440 15:34:59 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:38.440 15:34:59 -- bdev/nbd_common.sh@41 -- # break 00:09:38.440 15:34:59 -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.440 15:34:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.440 15:34:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:38.698 15:35:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:38.698 15:35:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:38.698 15:35:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:38.698 15:35:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.698 15:35:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.698 15:35:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:38.698 15:35:00 -- bdev/nbd_common.sh@41 -- # break 00:09:38.698 15:35:00 -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.698 15:35:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.698 15:35:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:38.956 15:35:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:38.956 15:35:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:38.956 15:35:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:38.956 15:35:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.956 15:35:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.956 15:35:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:38.956 15:35:00 -- bdev/nbd_common.sh@41 -- # break 00:09:38.956 15:35:00 -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.956 15:35:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.956 15:35:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:39.214 15:35:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:39.214 15:35:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:39.214 15:35:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:39.214 15:35:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.214 15:35:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.214 15:35:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:39.214 15:35:00 -- bdev/nbd_common.sh@41 -- # break 00:09:39.214 15:35:00 -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.214 15:35:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:39.214 15:35:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:39.472 15:35:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:39.472 15:35:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:39.472 15:35:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:39.473 15:35:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.473 15:35:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.473 15:35:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:39.473 15:35:00 -- bdev/nbd_common.sh@41 -- # break 00:09:39.473 15:35:00 -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.473 15:35:00 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:39.473 15:35:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.473 15:35:00 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:39.731 15:35:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:39.731 15:35:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:39.731 15:35:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:39.731 15:35:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:39.731 15:35:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:39.731 15:35:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:39.731 15:35:01 -- bdev/nbd_common.sh@65 -- # true 00:09:39.731 15:35:01 -- bdev/nbd_common.sh@65 -- # count=0 00:09:39.731 15:35:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:39.731 15:35:01 -- bdev/nbd_common.sh@104 -- # count=0 00:09:39.731 15:35:01 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:39.731 15:35:01 -- bdev/nbd_common.sh@109 -- # return 0 00:09:39.731 15:35:01 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:39.731 15:35:01 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.731 15:35:01 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:39.731 15:35:01 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:39.731 15:35:01 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:39.731 15:35:01 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:39.990 malloc_lvol_verify 00:09:39.990 15:35:01 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:40.247 625107e9-a86c-4cb2-883b-a78d192c582d 00:09:40.248 15:35:01 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:40.506 2cc5be49-a4fd-4072-8272-72332df59e6d 00:09:40.506 15:35:01 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:40.764 /dev/nbd0 00:09:40.764 15:35:02 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:40.764 mke2fs 1.46.5 (30-Dec-2021) 00:09:40.764 Discarding device blocks: 0/4096 done 00:09:40.764 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:40.764 00:09:40.764 Allocating group tables: 0/1 done 00:09:40.764 Writing inode tables: 0/1 done 00:09:40.764 Creating journal (1024 blocks): done 00:09:40.764 Writing superblocks and filesystem accounting information: 0/1 done 00:09:40.764 00:09:40.764 15:35:02 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:40.764 15:35:02 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:40.764 15:35:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.764 15:35:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:40.764 15:35:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:40.764 15:35:02 -- bdev/nbd_common.sh@51 -- # local i 00:09:40.764 15:35:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.764 15:35:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:41.023 15:35:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:41.023 15:35:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:41.023 15:35:02 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0
00:09:41.023 15:35:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:41.023 15:35:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:41.023 15:35:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:41.023 15:35:02 -- bdev/nbd_common.sh@41 -- # break
00:09:41.023 15:35:02 -- bdev/nbd_common.sh@45 -- # return 0
00:09:41.023 15:35:02 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:09:41.023 15:35:02 -- bdev/nbd_common.sh@147 -- # return 0
00:09:41.023 15:35:02 -- bdev/blockdev.sh@324 -- # killprocess 61104
00:09:41.023 15:35:02 -- common/autotest_common.sh@926 -- # '[' -z 61104 ']'
00:09:41.023 15:35:02 -- common/autotest_common.sh@930 -- # kill -0 61104
00:09:41.023 15:35:02 -- common/autotest_common.sh@931 -- # uname
00:09:41.023 15:35:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:09:41.023 15:35:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61104
00:09:41.023 15:35:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:09:41.023 15:35:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:09:41.023 killing process with pid 61104
00:09:41.023 15:35:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61104'
00:09:41.023 15:35:02 -- common/autotest_common.sh@945 -- # kill 61104
00:09:41.023 15:35:02 -- common/autotest_common.sh@950 -- # wait 61104
00:09:41.956 15:35:03 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:09:41.956
00:09:41.956 real 0m12.418s
00:09:41.956 user 0m17.637s
00:09:41.956 sys 0m3.652s
00:09:41.956 15:35:03 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:41.956 15:35:03 -- common/autotest_common.sh@10 -- # set +x
00:09:41.956 ************************************
00:09:41.956 END TEST bdev_nbd
00:09:41.956 ************************************
00:09:41.956 15:35:03 -- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:09:41.956 15:35:03 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']'
00:09:41.956 skipping fio tests on NVMe due to multi-ns failures.
00:09:41.956 15:35:03 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:09:41.956 15:35:03 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:09:41.956 15:35:03 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:41.956 15:35:03 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']'
00:09:41.956 15:35:03 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:09:41.956 15:35:03 -- common/autotest_common.sh@10 -- # set +x
00:09:41.956 ************************************
00:09:41.956 START TEST bdev_verify
00:09:41.956 ************************************
00:09:41.956 15:35:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:42.215 [2024-07-24 15:35:03.578306] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
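Before the bdevperf verify output below, it is worth spelling out the I/O pattern the just-finished bdev_nbd test used for data verification: nbd_dd_data_verify seeds a 1 MiB random file, dd's it onto every exported /dev/nbd* device with O_DIRECT, then cmp's each device back against the file. A minimal standalone sketch of that pattern (the scratch path and device list are illustrative assumptions, not the harness's exact values):

  #!/usr/bin/env bash
  # Write/verify pattern from nbd_common.sh's nbd_dd_data_verify, reduced to its core.
  set -euo pipefail
  tmp_file=/tmp/nbdrandtest          # assumed scratch path (the harness uses test/bdev/nbdrandtest)
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  # Seed 1 MiB of random data once.
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  # Fan the pattern out to every nbd device, bypassing the page cache.
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done
  # Verify: byte-compare the first 1 MiB of each device against the seed file.
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"    # any mismatch exits non-zero and fails the test
  done
  rm -f "$tmp_file"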
00:09:42.215 [2024-07-24 15:35:03.578494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61504 ]
00:09:42.215 [2024-07-24 15:35:03.749060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:42.473 [2024-07-24 15:35:03.937197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:42.473 [2024-07-24 15:35:03.937211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:43.039 Running I/O for 5 seconds...
00:09:48.304
00:09:48.304 Latency(us)
00:09:48.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:48.304 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:48.304 Verification LBA range: start 0x0 length 0xbd0bd
00:09:48.304 Nvme0n1 : 5.04 2750.12 10.74 0.00 0.00 46437.40 5153.51 51713.86
00:09:48.304 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:48.304 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:09:48.304 Nvme0n1 : 5.04 2742.34 10.71 0.00 0.00 46521.36 8817.57 61961.31
00:09:48.304 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:48.304 Verification LBA range: start 0x0 length 0xa0000
00:09:48.304 Nvme1n1 : 5.05 2749.30 10.74 0.00 0.00 46416.27 5779.08 49330.73
00:09:48.304 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:48.304 Verification LBA range: start 0xa0000 length 0xa0000
00:09:48.304 Nvme1n1 : 5.05 2748.27 10.74 0.00 0.00 46389.35 3678.95 55288.55
00:09:48.304 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:48.304 Verification LBA range: start 0x0 length 0x80000
00:09:48.304 Nvme2n1 : 5.05 2748.50 10.74 0.00 0.00 46380.16 6196.13 47900.86
00:09:48.304 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:48.304 Verification LBA range: start 0x80000 length 0x80000
00:09:48.304 Nvme2n1 : 5.05 2747.07 10.73 0.00 0.00 46293.15 4974.78 48854.11
00:09:48.304 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:48.304 Verification LBA range: start 0x0 length 0x80000
00:09:48.304 Nvme2n2 : 5.05 2747.28 10.73 0.00 0.00 46337.29 7447.27 43372.92
00:09:48.304 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:48.304 Verification LBA range: start 0x80000 length 0x80000
00:09:48.304 Nvme2n2 : 5.05 2751.47 10.75 0.00 0.00 46167.98 2383.13 44802.79
00:09:48.304 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:48.304 Verification LBA range: start 0x0 length 0x80000
00:09:48.304 Nvme2n3 : 5.05 2746.02 10.73 0.00 0.00 46312.26 8996.31 44087.85
00:09:48.304 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:48.304 Verification LBA range: start 0x80000 length 0x80000
00:09:48.304 Nvme2n3 : 5.06 2750.75 10.75 0.00 0.00 46132.18 2859.75 44564.48
00:09:48.304 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:48.304 Verification LBA range: start 0x0 length 0x20000
00:09:48.304 Nvme3n1 : 5.05 2745.22 10.72 0.00 0.00 46277.63 9413.35 44564.48
00:09:48.304 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:48.304 Verification LBA range: start 0x20000 length 0x20000
00:09:48.304 Nvme3n1 : 5.06 2750.03 10.74 0.00 0.00 46103.17 3485.32 44087.85
===================================================================================================================
00:09:48.305 Total : 32976.36 128.81 0.00 0.00 46313.84 2383.13 61961.31
00:09:58.266
00:09:58.266 real 0m15.168s
00:09:58.266 user 0m28.944s
00:09:58.266 sys 0m0.326s
00:09:58.266 15:35:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:58.266 15:35:18 -- common/autotest_common.sh@10 -- # set +x
00:09:58.266 ************************************
00:09:58.266 END TEST bdev_verify
00:09:58.266 ************************************
00:09:58.266 15:35:18 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:58.266 15:35:18 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']'
00:09:58.266 15:35:18 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:09:58.266 15:35:18 -- common/autotest_common.sh@10 -- # set +x
00:09:58.266 ************************************
00:09:58.266 START TEST bdev_verify_big_io
00:09:58.266 ************************************
00:09:58.266 15:35:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:58.266 [2024-07-24 15:35:18.803130] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:09:58.266 [2024-07-24 15:35:18.803350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61681 ]
00:09:58.266 [2024-07-24 15:35:18.973267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:58.266 [2024-07-24 15:35:19.152192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:58.266 [2024-07-24 15:35:19.152209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:58.525 Running I/O for 5 seconds...
00:10:03.795
00:10:03.795 Latency(us)
00:10:03.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:03.795 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:03.795 Verification LBA range: start 0x0 length 0xbd0b
00:10:03.795 Nvme0n1 : 5.38 266.04 16.63 0.00 0.00 470176.35 56003.49 629145.60
00:10:03.795 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:03.795 Verification LBA range: start 0xbd0b length 0xbd0b
00:10:03.795 Nvme0n1 : 5.37 266.16 16.64 0.00 0.00 469996.69 56480.12 617706.59
00:10:03.795 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:03.795 Verification LBA range: start 0x0 length 0xa000
00:10:03.795 Nvme1n1 : 5.38 265.76 16.61 0.00 0.00 463650.74 60054.81 568137.54
00:10:03.795 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:03.795 Verification LBA range: start 0xa000 length 0xa000
00:10:03.795 Nvme1n1 : 5.37 266.05 16.63 0.00 0.00 464934.30 56718.43 564324.54
00:10:03.795 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:03.795 Verification LBA range: start 0x0 length 0x8000
00:10:03.795 Nvme2n1 : 5.40 273.65 17.10 0.00 0.00 448492.02 15490.33 518568.49
00:10:03.795 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:03.795 Verification LBA range: start 0x8000 length 0x8000
00:10:03.795 Nvme2n1 : 5.38 274.47 17.15 0.00 0.00 449465.92 5332.25 522381.50
00:10:03.795 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:03.795 Verification LBA range: start 0x0 length 0x8000
00:10:03.795 Nvme2n2 : 5.40 273.46 17.09 0.00 0.00 442552.74 17754.30 472812.45
00:10:03.795 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:03.795 Verification LBA range: start 0x8000 length 0x8000
00:10:03.795 Nvme2n2 : 5.39 274.36 17.15 0.00 0.00 443405.25 5779.08 467092.95
00:10:03.795 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:03.795 Verification LBA range: start 0x0 length 0x8000
00:10:03.795 Nvme2n3 : 5.41 281.48 17.59 0.00 0.00 426142.71 4349.21 425149.91
00:10:03.795 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:03.795 Verification LBA range: start 0x8000 length 0x8000
00:10:03.795 Nvme2n3 : 5.39 282.58 17.66 0.00 0.00 426305.96 3232.12 419430.40
00:10:03.795 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:03.795 Verification LBA range: start 0x0 length 0x2000
00:10:03.795 Nvme3n1 : 5.42 288.99 18.06 0.00 0.00 409739.91 4974.78 423243.40
00:10:03.795 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:03.795 Verification LBA range: start 0x2000 length 0x2000
00:10:03.795 Nvme3n1 : 5.40 282.48 17.65 0.00 0.00 420494.13 3678.95 425149.91
===================================================================================================================
00:10:03.795 Total : 3295.47 205.97 0.00 0.00 444036.86 3232.12 629145.60
00:10:05.697
00:10:05.697 real 0m8.077s
00:10:05.697 user 0m14.860s
00:10:05.697 sys 0m0.254s
00:10:05.697 15:35:26 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:05.697 15:35:26 -- common/autotest_common.sh@10 -- # set +x
00:10:05.697 ************************************
00:10:05.697 END TEST bdev_verify_big_io
00:10:05.697 ************************************
00:10:05.697 15:35:26 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:05.697 15:35:26 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:10:05.697 15:35:26 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:05.697 15:35:26 -- common/autotest_common.sh@10 -- # set +x
00:10:05.697 ************************************
00:10:05.697 START TEST bdev_write_zeroes
00:10:05.697 ************************************
00:10:05.697 15:35:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:05.697 [2024-07-24 15:35:26.926618] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:10:05.697 [2024-07-24 15:35:26.927440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61785 ]
00:10:05.697 [2024-07-24 15:35:27.094669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:05.697 [2024-07-24 15:35:27.279310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:06.631 Running I/O for 1 seconds...
00:10:07.565
00:10:07.565 Latency(us)
00:10:07.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:07.565 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:07.565 Nvme0n1 : 1.02 8320.66 32.50 0.00 0.00 15324.03 11796.48 26452.71
00:10:07.565 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:07.565 Nvme1n1 : 1.02 8307.68 32.45 0.00 0.00 15322.88 12213.53 27286.81
00:10:07.565 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:07.565 Nvme2n1 : 1.02 8336.37 32.56 0.00 0.00 15253.26 9889.98 25499.46
00:10:07.565 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:07.565 Nvme2n2 : 1.02 8323.82 32.51 0.00 0.00 15204.44 10187.87 22282.24
00:10:07.565 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:07.565 Nvme2n3 : 1.02 8311.41 32.47 0.00 0.00 15203.48 10366.60 22520.55
00:10:07.565 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:07.565 Nvme3n1 : 1.03 8346.91 32.61 0.00 0.00 15083.53 7000.44 21090.68
===================================================================================================================
00:10:07.565 Total : 49946.85 195.10 0.00 0.00 15231.52 7000.44 27286.81
00:10:08.942
00:10:08.942 real 0m3.312s
00:10:08.942 user 0m2.961s
00:10:08.942 sys 0m0.227s
00:10:08.942 15:35:30 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:08.942 15:35:30 -- common/autotest_common.sh@10 -- # set +x
00:10:08.942 ************************************
00:10:08.942 END TEST bdev_write_zeroes
00:10:08.942 ************************************
00:10:08.942 15:35:30 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:08.942 15:35:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:10:08.942 15:35:30 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:08.942 15:35:30 -- common/autotest_common.sh@10 -- # set +x
00:10:08.943 ************************************
00:10:08.943 START TEST bdev_json_nonenclosed
00:10:08.943 ************************************
00:10:08.943 15:35:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:08.943 [2024-07-24 15:35:30.290038] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:10:08.943 [2024-07-24 15:35:30.290251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61844 ]
00:10:09.201 [2024-07-24 15:35:30.462240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:09.201 [2024-07-24 15:35:30.682639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:09.201 [2024-07-24 15:35:30.682856] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:10:09.201 [2024-07-24 15:35:30.682886] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:09.769
00:10:09.769 real 0m0.863s
00:10:09.769 user 0m0.623s
00:10:09.769 sys 0m0.134s
00:10:09.769 15:35:31 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:09.769 15:35:31 -- common/autotest_common.sh@10 -- # set +x
00:10:09.769 ************************************
00:10:09.769 END TEST bdev_json_nonenclosed
00:10:09.769 ************************************
00:10:09.769 15:35:31 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:09.769 15:35:31 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:10:09.769 15:35:31 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:09.769 15:35:31 -- common/autotest_common.sh@10 -- # set +x
00:10:09.769 ************************************
00:10:09.769 START TEST bdev_json_nonarray
00:10:09.769 ************************************
00:10:09.769 15:35:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:10.027 [2024-07-24 15:35:31.199803] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:10:10.027 [2024-07-24 15:35:31.199977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61875 ]
00:10:10.027 [2024-07-24 15:35:31.370999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:10.027 [2024-07-24 15:35:31.551120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:10.027 [2024-07-24 15:35:31.551377] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
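Both JSON negative tests here assert that bdevperf rejects malformed configs instead of crashing: nonenclosed.json trips json_config.c:595 ("not enclosed in {}"), and nonarray.json trips json_config.c:601 ("'subsystems' should be an array"), whose error closes this block. Reconstructed for illustration only (the repository files may differ in detail), inputs of roughly this shape reproduce the two errors:

  # nonenclosed.json: top-level content without the enclosing {} object
  "subsystems": []
  # nonarray.json: "subsystems" present, but bound to an object instead of an array
  { "subsystems": { "method": "bdev_malloc_create" } }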
00:10:10.027 [2024-07-24 15:35:31.551412] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:10.595 00:10:10.595 real 0m0.814s 00:10:10.595 user 0m0.585s 00:10:10.595 sys 0m0.123s 00:10:10.595 15:35:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.595 ************************************ 00:10:10.595 END TEST bdev_json_nonarray 00:10:10.595 ************************************ 00:10:10.595 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:10:10.595 15:35:31 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:10:10.595 15:35:31 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:10:10.595 15:35:31 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:10:10.595 15:35:31 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:10:10.595 15:35:31 -- bdev/blockdev.sh@809 -- # cleanup 00:10:10.595 15:35:31 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:10.595 15:35:31 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:10.595 15:35:31 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:10:10.595 15:35:31 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:10:10.595 15:35:31 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:10:10.595 15:35:31 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:10:10.595 00:10:10.595 real 0m50.686s 00:10:10.595 user 1m20.651s 00:10:10.595 sys 0m6.127s 00:10:10.595 15:35:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.595 ************************************ 00:10:10.595 END TEST blockdev_nvme 00:10:10.595 ************************************ 00:10:10.595 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:10:10.595 15:35:32 -- spdk/autotest.sh@219 -- # uname -s 00:10:10.595 15:35:32 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:10:10.595 15:35:32 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:10.595 15:35:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:10.595 15:35:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:10.595 15:35:32 -- common/autotest_common.sh@10 -- # set +x 00:10:10.595 ************************************ 00:10:10.595 START TEST blockdev_nvme_gpt 00:10:10.595 ************************************ 00:10:10.595 15:35:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:10.595 * Looking for test storage... 
00:10:10.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:10.595 15:35:32 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:10.595 15:35:32 -- bdev/nbd_common.sh@6 -- # set -e 00:10:10.595 15:35:32 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:10.595 15:35:32 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:10.595 15:35:32 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:10.595 15:35:32 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:10.595 15:35:32 -- bdev/blockdev.sh@18 -- # : 00:10:10.595 15:35:32 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:10:10.595 15:35:32 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:10:10.595 15:35:32 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:10:10.595 15:35:32 -- bdev/blockdev.sh@672 -- # uname -s 00:10:10.595 15:35:32 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:10:10.595 15:35:32 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:10:10.595 15:35:32 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:10:10.595 15:35:32 -- bdev/blockdev.sh@681 -- # crypto_device= 00:10:10.595 15:35:32 -- bdev/blockdev.sh@682 -- # dek= 00:10:10.595 15:35:32 -- bdev/blockdev.sh@683 -- # env_ctx= 00:10:10.595 15:35:32 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:10:10.595 15:35:32 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:10:10.595 15:35:32 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:10:10.595 15:35:32 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:10:10.595 15:35:32 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:10:10.595 15:35:32 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61950 00:10:10.595 15:35:32 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:10.595 15:35:32 -- bdev/blockdev.sh@47 -- # waitforlisten 61950 00:10:10.595 15:35:32 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:10.595 15:35:32 -- common/autotest_common.sh@819 -- # '[' -z 61950 ']' 00:10:10.595 15:35:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.595 15:35:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:10.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.595 15:35:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.595 15:35:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:10.595 15:35:32 -- common/autotest_common.sh@10 -- # set +x 00:10:10.854 [2024-07-24 15:35:32.218371] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
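The waitforlisten call traced above is essentially a bounded poll: it retries an RPC against the target's UNIX socket until the socket answers or max_retries (100 in this trace) is exhausted. A stripped-down sketch of that loop; using rpc_get_methods as the probe is an assumption, and the real helper lives in common/autotest_common.sh:

  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  i=0
  # Probe the RPC socket until spdk_tgt answers; bail out after max_retries attempts.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; do
      i=$((i + 1))
      if [ "$i" -ge "$max_retries" ]; then
          echo "process never started listening on $rpc_addr" >&2
          exit 1
      fi
      sleep 0.5
  done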
00:10:10.854 [2024-07-24 15:35:32.218541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61950 ] 00:10:10.854 [2024-07-24 15:35:32.384191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.112 [2024-07-24 15:35:32.562628] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:11.112 [2024-07-24 15:35:32.562854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.601 15:35:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:12.601 15:35:33 -- common/autotest_common.sh@852 -- # return 0 00:10:12.601 15:35:33 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:10:12.601 15:35:33 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:10:12.601 15:35:33 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:12.858 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:12.858 Waiting for block devices as requested 00:10:12.858 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:13.116 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:13.116 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:13.116 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.388 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:18.388 15:35:39 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:10:18.388 15:35:39 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:10:18.388 15:35:39 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:10:18.388 15:35:39 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:10:18.388 15:35:39 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:18.388 15:35:39 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:10:18.388 15:35:39 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:10:18.388 15:35:39 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:10:18.388 15:35:39 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:18.388 15:35:39 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:18.388 15:35:39 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:10:18.388 15:35:39 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:10:18.388 15:35:39 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:18.388 15:35:39 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:18.388 15:35:39 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:18.388 15:35:39 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:10:18.388 15:35:39 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:10:18.388 15:35:39 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:18.388 15:35:39 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:18.388 15:35:39 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:18.388 15:35:39 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:10:18.388 15:35:39 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:10:18.388 15:35:39 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:10:18.388 15:35:39 -- 
common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:18.388 15:35:39 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:18.388 15:35:39 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:10:18.388 15:35:39 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:10:18.388 15:35:39 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:10:18.388 15:35:39 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:18.388 15:35:39 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:18.388 15:35:39 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:10:18.388 15:35:39 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:10:18.388 15:35:39 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:18.388 15:35:39 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:18.388 15:35:39 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:18.388 15:35:39 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:10:18.388 15:35:39 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:10:18.388 15:35:39 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:10:18.388 15:35:39 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:18.388 15:35:39 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:10:18.388 15:35:39 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:10:18.388 15:35:39 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:10:18.388 15:35:39 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:18.388 15:35:39 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:10:18.388 15:35:39 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:10:18.388 15:35:39 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:10:18.388 15:35:39 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:10:18.388 BYT; 00:10:18.388 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:18.388 15:35:39 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:10:18.388 BYT; 00:10:18.388 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:18.388 15:35:39 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:10:18.388 15:35:39 -- bdev/blockdev.sh@114 -- # break 00:10:18.388 15:35:39 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:10:18.388 15:35:39 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:18.388 15:35:39 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:18.388 15:35:39 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:18.388 15:35:39 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:10:18.388 15:35:39 -- scripts/common.sh@410 -- # local spdk_guid 00:10:18.388 15:35:39 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:18.388 15:35:39 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:18.388 15:35:39 -- scripts/common.sh@415 -- # IFS='()' 00:10:18.388 15:35:39 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:10:18.388 15:35:39 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:18.388 15:35:39 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:18.388 15:35:39 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:18.388 15:35:39 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:18.388 15:35:39 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:18.388 15:35:39 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:10:18.388 15:35:39 -- scripts/common.sh@422 -- # local spdk_guid 00:10:18.388 15:35:39 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:18.388 15:35:39 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:18.388 15:35:39 -- scripts/common.sh@427 -- # IFS='()' 00:10:18.388 15:35:39 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:10:18.388 15:35:39 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:18.388 15:35:39 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:18.388 15:35:39 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:18.388 15:35:39 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:18.388 15:35:39 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:18.388 15:35:39 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:10:19.323 The operation has completed successfully. 00:10:19.323 15:35:40 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:10:20.258 The operation has completed successfully. 
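The GPT setup that just completed has three moving parts: parted writes a fresh GPT label with two half-disk test partitions, scripts/common.sh scrapes SPDK's partition-type GUIDs out of module/bdev/gpt/gpt.h, and sgdisk stamps those GUIDs (plus fixed unique-partition GUIDs) onto partitions 1 and 2 so the gpt vbdev module will claim them. Condensed into one sketch; the sed pipeline is an illustrative equivalent of the IFS='()' read the harness uses, not its exact code:

  gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
  dev=/dev/nvme2n1
  parted -s "$dev" mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
  # Turn 'GUID(0x6527994e, 0x2c5a, ...)' from the header into 6527994e-2c5a-4eec-9613-8f5944074e8b.
  guid=$(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h" | sed -e 's/.*(//' -e 's/).*//' -e 's/0x//g' -e 's/, /-/g')
  old_guid=$(grep -w SPDK_GPT_PART_TYPE_GUID_OLD "$gpt_h" | sed -e 's/.*(//' -e 's/).*//' -e 's/0x//g' -e 's/, /-/g')
  # Retype the partitions; -u pins the unique partition GUIDs the tests expect to see later.
  sgdisk -t 1:"$guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
  sgdisk -t 2:"$old_guid" -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"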
00:10:20.258 15:35:41 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:21.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:21.450 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:10:21.450 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:10:21.450 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:10:21.450 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:10:21.709 15:35:43 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:10:21.709 15:35:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:21.709 15:35:43 -- common/autotest_common.sh@10 -- # set +x 00:10:21.709 [] 00:10:21.709 15:35:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:21.709 15:35:43 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:10:21.709 15:35:43 -- bdev/blockdev.sh@79 -- # local json 00:10:21.709 15:35:43 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:10:21.709 15:35:43 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:21.709 15:35:43 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:10:21.709 15:35:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:21.709 15:35:43 -- common/autotest_common.sh@10 -- # set +x 00:10:21.979 15:35:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:21.979 15:35:43 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:10:21.979 15:35:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:21.979 15:35:43 -- common/autotest_common.sh@10 -- # set +x 00:10:21.979 15:35:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:21.979 15:35:43 -- bdev/blockdev.sh@738 -- # cat 00:10:21.979 15:35:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:10:21.979 15:35:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:21.979 15:35:43 -- common/autotest_common.sh@10 -- # set +x 00:10:21.979 15:35:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:21.979 15:35:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:10:21.979 15:35:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:21.979 15:35:43 -- common/autotest_common.sh@10 -- # set +x 00:10:21.979 15:35:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:21.979 15:35:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:21.979 15:35:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:21.979 15:35:43 -- common/autotest_common.sh@10 -- # set +x 00:10:21.979 15:35:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:21.979 15:35:43 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:10:21.979 15:35:43 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:10:21.979 15:35:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:21.979 15:35:43 -- common/autotest_common.sh@10 -- # set +x 00:10:21.979 15:35:43 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:10:21.979 15:35:43 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:21.979 15:35:43 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:10:22.236 15:35:43 -- bdev/blockdev.sh@747 -- # jq -r .name 00:10:22.237 15:35:43 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "a6dc1920-dbdc-4683-8df6-ceb0729eaa48"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a6dc1920-dbdc-4683-8df6-ceb0729eaa48",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "f744546b-b122-4dd9-bd4e-d4767874849e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f744546b-b122-4dd9-bd4e-d4767874849e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ff767463-ffed-49ac-ac60-4820ab7f1bbb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ff767463-ffed-49ac-ac60-4820ab7f1bbb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "4b054ab4-fa8b-48b7-9adf-f2f0083c4dc6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4b054ab4-fa8b-48b7-9adf-f2f0083c4dc6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d85059e4-a6e1-476e-8b25-26baa248ac49"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d85059e4-a6e1-476e-8b25-26baa248ac49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:22.237 15:35:43 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:10:22.237 15:35:43 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:10:22.237 15:35:43 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:10:22.237 15:35:43 -- bdev/blockdev.sh@752 -- # killprocess 61950 00:10:22.237 15:35:43 -- common/autotest_common.sh@926 -- # '[' -z 61950 ']' 00:10:22.237 15:35:43 -- common/autotest_common.sh@930 -- # kill -0 61950 00:10:22.237 15:35:43 -- common/autotest_common.sh@931 -- # uname 00:10:22.237 15:35:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:22.237 15:35:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61950 00:10:22.237 15:35:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:22.237 killing process with pid 61950 00:10:22.237 15:35:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:22.237 15:35:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61950' 00:10:22.237 15:35:43 -- common/autotest_common.sh@945 -- # kill 61950 00:10:22.237 15:35:43 -- common/autotest_common.sh@950 -- # wait 61950 00:10:24.162 15:35:45 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:24.162 15:35:45 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:10:24.162 15:35:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:24.162 15:35:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:24.162 15:35:45 -- common/autotest_common.sh@10 -- # set +x 00:10:24.421 ************************************ 00:10:24.421 START TEST bdev_hello_world 00:10:24.421 ************************************ 00:10:24.421 15:35:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:10:24.421 [2024-07-24 15:35:45.840566] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:24.421 [2024-07-24 15:35:45.840701] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62616 ] 00:10:24.421 [2024-07-24 15:35:45.998417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.680 [2024-07-24 15:35:46.181988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.247 [2024-07-24 15:35:46.771926] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:25.247 [2024-07-24 15:35:46.771988] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:10:25.247 [2024-07-24 15:35:46.772016] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:25.247 [2024-07-24 15:35:46.774948] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:25.247 [2024-07-24 15:35:46.775678] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:25.247 [2024-07-24 15:35:46.775720] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:25.247 [2024-07-24 15:35:46.776031] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:10:25.247 00:10:25.247 [2024-07-24 15:35:46.776078] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:26.622 00:10:26.622 real 0m2.089s 00:10:26.622 user 0m1.785s 00:10:26.622 sys 0m0.194s 00:10:26.622 15:35:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.622 ************************************ 00:10:26.622 END TEST bdev_hello_world 00:10:26.622 15:35:47 -- common/autotest_common.sh@10 -- # set +x 00:10:26.622 ************************************ 00:10:26.622 15:35:47 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:10:26.622 15:35:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:26.622 15:35:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:26.622 15:35:47 -- common/autotest_common.sh@10 -- # set +x 00:10:26.622 ************************************ 00:10:26.622 START TEST bdev_bounds 00:10:26.622 ************************************ 00:10:26.622 15:35:47 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:10:26.622 15:35:47 -- bdev/blockdev.sh@288 -- # bdevio_pid=62659 00:10:26.622 15:35:47 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:26.622 15:35:47 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:26.622 Process bdevio pid: 62659 00:10:26.622 15:35:47 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 62659' 00:10:26.622 15:35:47 -- bdev/blockdev.sh@291 -- # waitforlisten 62659 00:10:26.622 15:35:47 -- common/autotest_common.sh@819 -- # '[' -z 62659 ']' 00:10:26.622 15:35:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.622 15:35:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:26.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.622 15:35:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
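For context: the waitforlisten helper traced here does nothing more than poll the app's UNIX-domain RPC socket until the target answers. A minimal stand-alone sketch of the same pattern (the rpc_get_methods probe, socket default, and retry budget are illustrative assumptions, not the autotest values):

wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        # rpc.py exits non-zero until the target is up and listening
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}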
00:10:26.622 15:35:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:26.622 15:35:47 -- common/autotest_common.sh@10 -- # set +x 00:10:26.622 [2024-07-24 15:35:47.986551] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:26.622 [2024-07-24 15:35:47.986703] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62659 ] 00:10:26.622 [2024-07-24 15:35:48.148433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:26.880 [2024-07-24 15:35:48.335249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.880 [2024-07-24 15:35:48.335703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.880 [2024-07-24 15:35:48.335729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.277 15:35:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:28.277 15:35:49 -- common/autotest_common.sh@852 -- # return 0 00:10:28.277 15:35:49 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:28.277 I/O targets: 00:10:28.277 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:10:28.277 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:10:28.277 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:28.277 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:28.277 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:28.277 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:28.277 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:28.277 00:10:28.277 00:10:28.277 CUnit - A unit testing framework for C - Version 2.1-3 00:10:28.277 http://cunit.sourceforge.net/ 00:10:28.277 00:10:28.277 00:10:28.277 Suite: bdevio tests on: Nvme3n1 00:10:28.277 Test: blockdev write read block ...passed 00:10:28.277 Test: blockdev write zeroes read block ...passed 00:10:28.277 Test: blockdev write zeroes read no split ...passed 00:10:28.277 Test: blockdev write zeroes read split ...passed 00:10:28.277 Test: blockdev write zeroes read split partial ...passed 00:10:28.277 Test: blockdev reset ...[2024-07-24 15:35:49.791295] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:28.277 [2024-07-24 15:35:49.795889] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
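The reset test just traced disconnects and reconnects the controller at 0000:00:09.0, which is what the paired nvme_ctrlr_disconnect / _bdev_nvme_reset_ctrlr_complete notices record. For manual debugging the same reset can be issued over RPC; a hedged sketch (the controller name Nvme3 is an assumption about how it was attached, and bdev_nvme_reset_controller should be verified against the rpc.py shipped in this tree):

# Assumed controller name; must match the name given at attach time.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme3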
00:10:28.277 passed 00:10:28.277 Test: blockdev write read 8 blocks ...passed 00:10:28.277 Test: blockdev write read size > 128k ...passed 00:10:28.277 Test: blockdev write read invalid size ...passed 00:10:28.277 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:28.277 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:28.277 Test: blockdev write read max offset ...passed 00:10:28.277 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:28.277 Test: blockdev writev readv 8 blocks ...passed 00:10:28.278 Test: blockdev writev readv 30 x 1block ...passed 00:10:28.278 Test: blockdev writev readv block ...passed 00:10:28.278 Test: blockdev writev readv size > 128k ...passed 00:10:28.278 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:28.278 Test: blockdev comparev and writev ...[2024-07-24 15:35:49.802878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26d00a000 len:0x1000 00:10:28.278 [2024-07-24 15:35:49.802938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:28.278 passed 00:10:28.278 Test: blockdev nvme passthru rw ...passed 00:10:28.278 Test: blockdev nvme passthru vendor specific ...passed 00:10:28.278 Test: blockdev nvme admin passthru ...[2024-07-24 15:35:49.803648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:28.278 [2024-07-24 15:35:49.803692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:28.278 passed 00:10:28.278 Test: blockdev copy ...passed 00:10:28.278 Suite: bdevio tests on: Nvme2n3 00:10:28.278 Test: blockdev write read block ...passed 00:10:28.278 Test: blockdev write zeroes read block ...passed 00:10:28.278 Test: blockdev write zeroes read no split ...passed 00:10:28.278 Test: blockdev write zeroes read split ...passed 00:10:28.278 Test: blockdev write zeroes read split partial ...passed 00:10:28.278 Test: blockdev reset ...[2024-07-24 15:35:49.868936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:28.278 [2024-07-24 15:35:49.873176] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
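Note that the COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) notices above are the expected output of bdevio's negative-path cases, not failures; the suites only fail if those statuses do not come back. To re-run the suites by hand outside the wrapper, the invocation traced earlier can be reproduced roughly like this (a sketch; the fixed sleep stands in for the waitforlisten polling shown above):

sudo /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
bdevio_pid=$!
sleep 2
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid" && wait "$bdevio_pid"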
00:10:28.278 passed 00:10:28.278 Test: blockdev write read 8 blocks ...passed 00:10:28.535 Test: blockdev write read size > 128k ...passed 00:10:28.535 Test: blockdev write read invalid size ...passed 00:10:28.535 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:28.535 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:28.535 Test: blockdev write read max offset ...passed 00:10:28.535 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:28.535 Test: blockdev writev readv 8 blocks ...passed 00:10:28.535 Test: blockdev writev readv 30 x 1block ...passed 00:10:28.535 Test: blockdev writev readv block ...passed 00:10:28.535 Test: blockdev writev readv size > 128k ...passed 00:10:28.535 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:28.535 Test: blockdev comparev and writev ...[2024-07-24 15:35:49.880678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x24c704000 len:0x1000 00:10:28.535 [2024-07-24 15:35:49.880735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:28.535 passed 00:10:28.535 Test: blockdev nvme passthru rw ...passed 00:10:28.535 Test: blockdev nvme passthru vendor specific ...passed 00:10:28.535 Test: blockdev nvme admin passthru ...[2024-07-24 15:35:49.881518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:28.535 [2024-07-24 15:35:49.881559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:28.535 passed 00:10:28.535 Test: blockdev copy ...passed 00:10:28.535 Suite: bdevio tests on: Nvme2n2 00:10:28.535 Test: blockdev write read block ...passed 00:10:28.535 Test: blockdev write zeroes read block ...passed 00:10:28.535 Test: blockdev write zeroes read no split ...passed 00:10:28.535 Test: blockdev write zeroes read split ...passed 00:10:28.535 Test: blockdev write zeroes read split partial ...passed 00:10:28.535 Test: blockdev reset ...[2024-07-24 15:35:49.946292] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:28.535 [2024-07-24 15:35:49.950356] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:28.535 passed 00:10:28.535 Test: blockdev write read 8 blocks ...passed 00:10:28.535 Test: blockdev write read size > 128k ...passed 00:10:28.535 Test: blockdev write read invalid size ...passed 00:10:28.535 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:28.535 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:28.535 Test: blockdev write read max offset ...passed 00:10:28.535 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:28.535 Test: blockdev writev readv 8 blocks ...passed 00:10:28.535 Test: blockdev writev readv 30 x 1block ...passed 00:10:28.536 Test: blockdev writev readv block ...passed 00:10:28.536 Test: blockdev writev readv size > 128k ...passed 00:10:28.536 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:28.536 Test: blockdev comparev and writev ...[2024-07-24 15:35:49.957850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x24c704000 len:0x1000 00:10:28.536 [2024-07-24 15:35:49.957907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:28.536 passed 00:10:28.536 Test: blockdev nvme passthru rw ...passed 00:10:28.536 Test: blockdev nvme passthru vendor specific ...passed 00:10:28.536 Test: blockdev nvme admin passthru ...[2024-07-24 15:35:49.958723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:28.536 [2024-07-24 15:35:49.958766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:28.536 passed 00:10:28.536 Test: blockdev copy ...passed 00:10:28.536 Suite: bdevio tests on: Nvme2n1 00:10:28.536 Test: blockdev write read block ...passed 00:10:28.536 Test: blockdev write zeroes read block ...passed 00:10:28.536 Test: blockdev write zeroes read no split ...passed 00:10:28.536 Test: blockdev write zeroes read split ...passed 00:10:28.536 Test: blockdev write zeroes read split partial ...passed 00:10:28.536 Test: blockdev reset ...[2024-07-24 15:35:50.024387] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:28.536 [2024-07-24 15:35:50.028557] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:28.536 passed 00:10:28.536 Test: blockdev write read 8 blocks ...passed 00:10:28.536 Test: blockdev write read size > 128k ...passed 00:10:28.536 Test: blockdev write read invalid size ...passed 00:10:28.536 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:28.536 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:28.536 Test: blockdev write read max offset ...passed 00:10:28.536 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:28.536 Test: blockdev writev readv 8 blocks ...passed 00:10:28.536 Test: blockdev writev readv 30 x 1block ...passed 00:10:28.536 Test: blockdev writev readv block ...passed 00:10:28.536 Test: blockdev writev readv size > 128k ...passed 00:10:28.536 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:28.536 Test: blockdev comparev and writev ...[2024-07-24 15:35:50.035975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27c23c000 len:0x1000 00:10:28.536 [2024-07-24 15:35:50.036035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:28.536 passed 00:10:28.536 Test: blockdev nvme passthru rw ...passed 00:10:28.536 Test: blockdev nvme passthru vendor specific ...[2024-07-24 15:35:50.036866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:28.536 [2024-07-24 15:35:50.036907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:28.536 passed 00:10:28.536 Test: blockdev nvme admin passthru ...passed 00:10:28.536 Test: blockdev copy ...passed 00:10:28.536 Suite: bdevio tests on: Nvme1n1 00:10:28.536 Test: blockdev write read block ...passed 00:10:28.536 Test: blockdev write zeroes read block ...passed 00:10:28.536 Test: blockdev write zeroes read no split ...passed 00:10:28.536 Test: blockdev write zeroes read split ...passed 00:10:28.536 Test: blockdev write zeroes read split partial ...passed 00:10:28.536 Test: blockdev reset ...[2024-07-24 15:35:50.101865] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:28.536 [2024-07-24 15:35:50.105485] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:28.536 passed 00:10:28.536 Test: blockdev write read 8 blocks ...passed 00:10:28.536 Test: blockdev write read size > 128k ...passed 00:10:28.536 Test: blockdev write read invalid size ...passed 00:10:28.536 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:28.536 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:28.536 Test: blockdev write read max offset ...passed 00:10:28.536 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:28.536 Test: blockdev writev readv 8 blocks ...passed 00:10:28.536 Test: blockdev writev readv 30 x 1block ...passed 00:10:28.536 Test: blockdev writev readv block ...passed 00:10:28.536 Test: blockdev writev readv size > 128k ...passed 00:10:28.536 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:28.536 Test: blockdev comparev and writev ...[2024-07-24 15:35:50.113120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27c238000 len:0x1000 00:10:28.536 [2024-07-24 15:35:50.113176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:28.536 passed 00:10:28.536 Test: blockdev nvme passthru rw ...passed 00:10:28.536 Test: blockdev nvme passthru vendor specific ...[2024-07-24 15:35:50.113960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:28.536 [2024-07-24 15:35:50.114002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:28.536 passed 00:10:28.536 Test: blockdev nvme admin passthru ...passed 00:10:28.536 Test: blockdev copy ...passed 00:10:28.536 Suite: bdevio tests on: Nvme0n1p2 00:10:28.536 Test: blockdev write read block ...passed 00:10:28.536 Test: blockdev write zeroes read block ...passed 00:10:28.536 Test: blockdev write zeroes read no split ...passed 00:10:28.794 Test: blockdev write zeroes read split ...passed 00:10:28.794 Test: blockdev write zeroes read split partial ...passed 00:10:28.794 Test: blockdev reset ...[2024-07-24 15:35:50.182005] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:28.794 [2024-07-24 15:35:50.185418] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:28.794 passed 00:10:28.794 Test: blockdev write read 8 blocks ...passed 00:10:28.794 Test: blockdev write read size > 128k ...passed 00:10:28.794 Test: blockdev write read invalid size ...passed 00:10:28.794 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:28.794 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:28.794 Test: blockdev write read max offset ...passed 00:10:28.794 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:28.794 Test: blockdev writev readv 8 blocks ...passed 00:10:28.794 Test: blockdev writev readv 30 x 1block ...passed 00:10:28.794 Test: blockdev writev readv block ...passed 00:10:28.794 Test: blockdev writev readv size > 128k ...passed 00:10:28.794 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:28.794 Test: blockdev comparev and writev ...passed 00:10:28.794 Test: blockdev nvme passthru rw ...passed 00:10:28.794 Test: blockdev nvme passthru vendor specific ...passed 00:10:28.794 Test: blockdev nvme admin passthru ...passed 00:10:28.794 Test: blockdev copy ...[2024-07-24 15:35:50.192242] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:10:28.794 separate metadata which is not supported yet. 00:10:28.794 passed 00:10:28.794 Suite: bdevio tests on: Nvme0n1p1 00:10:28.794 Test: blockdev write read block ...passed 00:10:28.794 Test: blockdev write zeroes read block ...passed 00:10:28.794 Test: blockdev write zeroes read no split ...passed 00:10:28.794 Test: blockdev write zeroes read split ...passed 00:10:28.795 Test: blockdev write zeroes read split partial ...passed 00:10:28.795 Test: blockdev reset ...[2024-07-24 15:35:50.246864] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:28.795 [2024-07-24 15:35:50.250348] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:28.795 passed 00:10:28.795 Test: blockdev write read 8 blocks ...passed 00:10:28.795 Test: blockdev write read size > 128k ...passed 00:10:28.795 Test: blockdev write read invalid size ...passed 00:10:28.795 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:28.795 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:28.795 Test: blockdev write read max offset ...passed 00:10:28.795 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:28.795 Test: blockdev writev readv 8 blocks ...passed 00:10:28.795 Test: blockdev writev readv 30 x 1block ...passed 00:10:28.795 Test: blockdev writev readv block ...passed 00:10:28.795 Test: blockdev writev readv size > 128k ...passed 00:10:28.795 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:28.795 Test: blockdev comparev and writev ...passed 00:10:28.795 Test: blockdev nvme passthru rw ...passed 00:10:28.795 Test: blockdev nvme passthru vendor specific ...passed 00:10:28.795 Test: blockdev nvme admin passthru ...passed 00:10:28.795 Test: blockdev copy ...[2024-07-24 15:35:50.257647] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:10:28.795 separate metadata which is not supported yet. 
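The skip messages above are expected: Nvme0n1p1 and Nvme0n1p2 sit on a namespace formatted with separate metadata, which the comparev_and_writev path does not support yet, so bdevio records those cases and moves on. Whether a bdev carries metadata can be checked over RPC; a sketch (treat the md_size field and the jq filter as assumptions about the bdev_get_bdevs JSON):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1p1 \
    | jq '.[0] | {name, block_size, md_size}'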
00:10:28.795 passed 00:10:28.795 00:10:28.795 Run Summary: Type Total Ran Passed Failed Inactive 00:10:28.795 suites 7 7 n/a 0 0 00:10:28.795 tests 161 161 161 0 0 00:10:28.795 asserts 1006 1006 1006 0 n/a 00:10:28.795 00:10:28.795 Elapsed time = 1.426 seconds 00:10:28.795 0 00:10:28.795 15:35:50 -- bdev/blockdev.sh@293 -- # killprocess 62659 00:10:28.795 15:35:50 -- common/autotest_common.sh@926 -- # '[' -z 62659 ']' 00:10:28.795 15:35:50 -- common/autotest_common.sh@930 -- # kill -0 62659 00:10:28.795 15:35:50 -- common/autotest_common.sh@931 -- # uname 00:10:28.795 15:35:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:28.795 15:35:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62659 00:10:28.795 15:35:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:28.795 15:35:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:28.795 15:35:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62659' 00:10:28.795 killing process with pid 62659 00:10:28.795 15:35:50 -- common/autotest_common.sh@945 -- # kill 62659 00:10:28.795 15:35:50 -- common/autotest_common.sh@950 -- # wait 62659 00:10:29.730 15:35:51 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:10:29.730 00:10:29.730 real 0m3.327s 00:10:29.730 user 0m8.815s 00:10:29.730 sys 0m0.352s 00:10:29.730 15:35:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.730 ************************************ 00:10:29.730 END TEST bdev_bounds 00:10:29.730 ************************************ 00:10:29.730 15:35:51 -- common/autotest_common.sh@10 -- # set +x 00:10:29.730 15:35:51 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:29.730 15:35:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:29.730 15:35:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:29.730 15:35:51 -- common/autotest_common.sh@10 -- # set +x 00:10:29.730 ************************************ 00:10:29.730 START TEST bdev_nbd 00:10:29.730 ************************************ 00:10:29.730 15:35:51 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:29.730 15:35:51 -- bdev/blockdev.sh@298 -- # uname -s 00:10:29.730 15:35:51 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:10:29.730 15:35:51 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:29.730 15:35:51 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:29.730 15:35:51 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:29.730 15:35:51 -- bdev/blockdev.sh@302 -- # local bdev_all 00:10:29.730 15:35:51 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:10:29.730 15:35:51 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:10:29.730 15:35:51 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:29.730 15:35:51 -- bdev/blockdev.sh@309 -- # local nbd_all 00:10:29.730 15:35:51 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:10:29.730 15:35:51 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:29.730 15:35:51 -- bdev/blockdev.sh@312 -- # local nbd_list 00:10:29.730 15:35:51 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:29.730 15:35:51 -- bdev/blockdev.sh@313 -- # local bdev_list 00:10:29.730 15:35:51 -- bdev/blockdev.sh@316 -- # nbd_pid=62726 00:10:29.730 15:35:51 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:29.730 15:35:51 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:29.730 15:35:51 -- bdev/blockdev.sh@318 -- # waitforlisten 62726 /var/tmp/spdk-nbd.sock 00:10:29.730 15:35:51 -- common/autotest_common.sh@819 -- # '[' -z 62726 ']' 00:10:29.730 15:35:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:29.730 15:35:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:29.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:29.730 15:35:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:29.730 15:35:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:29.730 15:35:51 -- common/autotest_common.sh@10 -- # set +x 00:10:29.988 [2024-07-24 15:35:51.361744] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:29.988 [2024-07-24 15:35:51.361892] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.988 [2024-07-24 15:35:51.540459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.247 [2024-07-24 15:35:51.741490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.623 15:35:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:31.623 15:35:53 -- common/autotest_common.sh@852 -- # return 0 00:10:31.623 15:35:53 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:31.623 15:35:53 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:31.623 15:35:53 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:31.623 15:35:53 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:31.623 15:35:53 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:31.623 15:35:53 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:31.623 15:35:53 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:31.623 15:35:53 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:31.623 15:35:53 -- bdev/nbd_common.sh@24 -- # local i 00:10:31.623 15:35:53 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:31.623 15:35:53 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:31.623 15:35:53 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:31.623 15:35:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:10:31.883 15:35:53 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:31.883 15:35:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:31.883 15:35:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:31.883 15:35:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:10:31.883 15:35:53 -- common/autotest_common.sh@857 -- # local i 00:10:31.883 15:35:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:31.883 15:35:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:31.883 15:35:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:10:31.883 15:35:53 -- common/autotest_common.sh@861 -- # break 00:10:31.883 15:35:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:31.883 15:35:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:31.883 15:35:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:31.883 1+0 records in 00:10:31.883 1+0 records out 00:10:31.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483644 s, 8.5 MB/s 00:10:31.883 15:35:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.883 15:35:53 -- common/autotest_common.sh@874 -- # size=4096 00:10:31.883 15:35:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.883 15:35:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:31.883 15:35:53 -- common/autotest_common.sh@877 -- # return 0 00:10:31.883 15:35:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:31.883 15:35:53 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:31.883 15:35:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:10:32.141 15:35:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:32.141 15:35:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:32.141 15:35:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:32.141 15:35:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:10:32.141 15:35:53 -- common/autotest_common.sh@857 -- # local i 00:10:32.142 15:35:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:32.142 15:35:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:32.142 15:35:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:10:32.142 15:35:53 -- common/autotest_common.sh@861 -- # break 00:10:32.142 15:35:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:32.142 15:35:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:32.142 15:35:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:32.142 1+0 records in 00:10:32.142 1+0 records out 00:10:32.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606564 s, 6.8 MB/s 00:10:32.142 15:35:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.142 15:35:53 -- common/autotest_common.sh@874 -- # size=4096 00:10:32.142 15:35:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.142 15:35:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:32.142 15:35:53 -- common/autotest_common.sh@877 -- # return 0 00:10:32.142 15:35:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:32.142 15:35:53 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:32.142 15:35:53 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:32.399 15:35:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:32.399 15:35:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:32.399 15:35:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:32.399 15:35:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:10:32.399 15:35:53 -- common/autotest_common.sh@857 -- # local i 00:10:32.399 15:35:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:32.399 15:35:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:32.399 15:35:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:10:32.399 15:35:53 -- common/autotest_common.sh@861 -- # break 00:10:32.399 15:35:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:32.399 15:35:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:32.399 15:35:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:32.399 1+0 records in 00:10:32.399 1+0 records out 00:10:32.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000684663 s, 6.0 MB/s 00:10:32.399 15:35:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.399 15:35:53 -- common/autotest_common.sh@874 -- # size=4096 00:10:32.399 15:35:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.399 15:35:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:32.399 15:35:53 -- common/autotest_common.sh@877 -- # return 0 00:10:32.399 15:35:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:32.399 15:35:53 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:32.399 15:35:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:32.657 15:35:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:32.657 15:35:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:32.657 15:35:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:32.657 15:35:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:10:32.657 15:35:54 -- common/autotest_common.sh@857 -- # local i 00:10:32.657 15:35:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:32.657 15:35:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:32.657 15:35:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:10:32.657 15:35:54 -- common/autotest_common.sh@861 -- # break 00:10:32.657 15:35:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:32.657 15:35:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:32.657 15:35:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:32.657 1+0 records in 00:10:32.657 1+0 records out 00:10:32.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577844 s, 7.1 MB/s 00:10:32.657 15:35:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.657 15:35:54 -- common/autotest_common.sh@874 -- # size=4096 00:10:32.657 15:35:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.657 15:35:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:32.657 15:35:54 -- common/autotest_common.sh@877 -- # return 0 00:10:32.657 15:35:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:32.657 15:35:54 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:32.657 15:35:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:32.915 15:35:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:32.915 15:35:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:32.915 15:35:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:32.915 15:35:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:10:32.915 15:35:54 -- common/autotest_common.sh@857 -- # local i 00:10:32.915 15:35:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:32.915 15:35:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:32.915 15:35:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:10:32.915 15:35:54 -- common/autotest_common.sh@861 -- # break 00:10:32.915 15:35:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:32.915 15:35:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:32.915 15:35:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:32.915 1+0 records in 00:10:32.915 1+0 records out 00:10:32.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811898 s, 5.0 MB/s 00:10:32.915 15:35:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.915 15:35:54 -- common/autotest_common.sh@874 -- # size=4096 00:10:32.915 15:35:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.915 15:35:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:32.915 15:35:54 -- common/autotest_common.sh@877 -- # return 0 00:10:32.915 15:35:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:32.915 15:35:54 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:32.915 15:35:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:33.481 15:35:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:33.481 15:35:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:33.481 15:35:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:33.481 15:35:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:10:33.481 15:35:54 -- common/autotest_common.sh@857 -- # local i 00:10:33.481 15:35:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:33.481 15:35:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:33.481 15:35:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:10:33.481 15:35:54 -- common/autotest_common.sh@861 -- # break 00:10:33.481 15:35:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:33.481 15:35:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:33.481 15:35:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:33.481 1+0 records in 00:10:33.481 1+0 records out 00:10:33.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613601 s, 6.7 MB/s 00:10:33.481 15:35:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.481 15:35:54 -- common/autotest_common.sh@874 -- # size=4096 00:10:33.481 15:35:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.481 15:35:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:33.481 15:35:54 -- common/autotest_common.sh@877 -- # return 0 
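Each pass of the loop above repeats one idiom per bdev: nbd_start_disk over /var/tmp/spdk-nbd.sock, a poll of /proc/partitions until the nbd node appears, then a single direct-I/O read through it. A condensed sketch of that waitfornbd pattern (the retry budget mirrors the i <= 20 loop in the trace; the sleep interval is an assumption):

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # one 4 KiB direct read proves the kernel<->SPDK nbd session is live
    dd if=/dev/"$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct
    stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
}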
00:10:33.481 15:35:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:33.481 15:35:54 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:33.481 15:35:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:33.481 15:35:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:33.481 15:35:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:33.739 15:35:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:33.739 15:35:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:10:33.739 15:35:55 -- common/autotest_common.sh@857 -- # local i 00:10:33.739 15:35:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:33.739 15:35:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:33.739 15:35:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:10:33.739 15:35:55 -- common/autotest_common.sh@861 -- # break 00:10:33.739 15:35:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:33.739 15:35:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:33.739 15:35:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:33.739 1+0 records in 00:10:33.739 1+0 records out 00:10:33.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000795637 s, 5.1 MB/s 00:10:33.739 15:35:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.739 15:35:55 -- common/autotest_common.sh@874 -- # size=4096 00:10:33.739 15:35:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.739 15:35:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:33.739 15:35:55 -- common/autotest_common.sh@877 -- # return 0 00:10:33.739 15:35:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:33.739 15:35:55 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:33.739 15:35:55 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:33.996 15:35:55 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:33.996 { 00:10:33.996 "nbd_device": "/dev/nbd0", 00:10:33.996 "bdev_name": "Nvme0n1p1" 00:10:33.996 }, 00:10:33.996 { 00:10:33.996 "nbd_device": "/dev/nbd1", 00:10:33.996 "bdev_name": "Nvme0n1p2" 00:10:33.996 }, 00:10:33.996 { 00:10:33.996 "nbd_device": "/dev/nbd2", 00:10:33.996 "bdev_name": "Nvme1n1" 00:10:33.996 }, 00:10:33.996 { 00:10:33.996 "nbd_device": "/dev/nbd3", 00:10:33.996 "bdev_name": "Nvme2n1" 00:10:33.996 }, 00:10:33.996 { 00:10:33.996 "nbd_device": "/dev/nbd4", 00:10:33.996 "bdev_name": "Nvme2n2" 00:10:33.996 }, 00:10:33.996 { 00:10:33.996 "nbd_device": "/dev/nbd5", 00:10:33.996 "bdev_name": "Nvme2n3" 00:10:33.996 }, 00:10:33.996 { 00:10:33.996 "nbd_device": "/dev/nbd6", 00:10:33.996 "bdev_name": "Nvme3n1" 00:10:33.996 } 00:10:33.996 ]' 00:10:33.997 15:35:55 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:33.997 15:35:55 -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:33.997 { 00:10:33.997 "nbd_device": "/dev/nbd0", 00:10:33.997 "bdev_name": "Nvme0n1p1" 00:10:33.997 }, 00:10:33.997 { 00:10:33.997 "nbd_device": "/dev/nbd1", 00:10:33.997 "bdev_name": "Nvme0n1p2" 00:10:33.997 }, 00:10:33.997 { 00:10:33.997 "nbd_device": "/dev/nbd2", 00:10:33.997 "bdev_name": "Nvme1n1" 00:10:33.997 }, 00:10:33.997 { 00:10:33.997 "nbd_device": "/dev/nbd3", 00:10:33.997 "bdev_name": "Nvme2n1" 00:10:33.997 }, 00:10:33.997 { 
00:10:33.997 "nbd_device": "/dev/nbd4", 00:10:33.997 "bdev_name": "Nvme2n2" 00:10:33.997 }, 00:10:33.997 { 00:10:33.997 "nbd_device": "/dev/nbd5", 00:10:33.997 "bdev_name": "Nvme2n3" 00:10:33.997 }, 00:10:33.997 { 00:10:33.997 "nbd_device": "/dev/nbd6", 00:10:33.997 "bdev_name": "Nvme3n1" 00:10:33.997 } 00:10:33.997 ]' 00:10:33.997 15:35:55 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:33.997 15:35:55 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:33.997 15:35:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.997 15:35:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:33.997 15:35:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:33.997 15:35:55 -- bdev/nbd_common.sh@51 -- # local i 00:10:33.997 15:35:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:33.997 15:35:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:34.255 15:35:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:34.255 15:35:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:34.255 15:35:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:34.255 15:35:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.255 15:35:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.255 15:35:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:34.255 15:35:55 -- bdev/nbd_common.sh@41 -- # break 00:10:34.255 15:35:55 -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.255 15:35:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.255 15:35:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:34.529 15:35:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:34.529 15:35:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:34.529 15:35:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:34.529 15:35:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.529 15:35:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.529 15:35:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:34.529 15:35:56 -- bdev/nbd_common.sh@41 -- # break 00:10:34.529 15:35:56 -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.529 15:35:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.529 15:35:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:34.797 15:35:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:34.797 15:35:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:34.797 15:35:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:34.797 15:35:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.797 15:35:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.797 15:35:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:34.797 15:35:56 -- bdev/nbd_common.sh@41 -- # break 00:10:34.797 15:35:56 -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.797 15:35:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.797 15:35:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:35.054 15:35:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
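Tear-down is symmetric: nbd_stop_disk is issued for each node, and the helper then waits for the name to drop out of /proc/partitions before touching the next device. A sketch of that waitfornbd_exit counterpart (same assumptions as the start-side helper sketched earlier):

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1
    done
    echo "$nbd_name still present after nbd_stop_disk" >&2
    return 1
}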
00:10:35.054 15:35:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:35.054 15:35:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:35.054 15:35:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.054 15:35:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.054 15:35:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:35.054 15:35:56 -- bdev/nbd_common.sh@41 -- # break 00:10:35.054 15:35:56 -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.054 15:35:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.054 15:35:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:35.311 15:35:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:35.311 15:35:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:35.311 15:35:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:35.311 15:35:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.311 15:35:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.311 15:35:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:35.311 15:35:56 -- bdev/nbd_common.sh@41 -- # break 00:10:35.311 15:35:56 -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.311 15:35:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.311 15:35:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:35.568 15:35:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:35.568 15:35:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:35.568 15:35:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:35.568 15:35:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.568 15:35:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.568 15:35:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:35.568 15:35:57 -- bdev/nbd_common.sh@41 -- # break 00:10:35.568 15:35:57 -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.568 15:35:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.568 15:35:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:35.826 15:35:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:35.826 15:35:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:35.826 15:35:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:10:35.826 15:35:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.826 15:35:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.826 15:35:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:35.826 15:35:57 -- bdev/nbd_common.sh@41 -- # break 00:10:35.826 15:35:57 -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.826 15:35:57 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:35.826 15:35:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:35.826 15:35:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:36.084 15:35:57 -- 
bdev/nbd_common.sh@65 -- # true 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@65 -- # count=0 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@122 -- # count=0 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@127 -- # return 0 00:10:36.084 15:35:57 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@12 -- # local i 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:36.084 15:35:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:10:36.342 /dev/nbd0 00:10:36.342 15:35:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:36.342 15:35:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:36.342 15:35:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:10:36.342 15:35:57 -- common/autotest_common.sh@857 -- # local i 00:10:36.342 15:35:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:36.342 15:35:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:36.342 15:35:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:10:36.342 15:35:57 -- common/autotest_common.sh@861 -- # break 00:10:36.342 15:35:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:36.342 15:35:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:36.342 15:35:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:36.342 1+0 records in 00:10:36.342 1+0 records out 00:10:36.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347566 s, 11.8 MB/s 00:10:36.342 15:35:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.342 15:35:57 -- common/autotest_common.sh@874 -- # size=4096 00:10:36.342 15:35:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.342 15:35:57 
-- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:36.342 15:35:57 -- common/autotest_common.sh@877 -- # return 0 00:10:36.342 15:35:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:36.342 15:35:57 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:36.342 15:35:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:10:36.600 /dev/nbd1 00:10:36.600 15:35:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:36.600 15:35:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:36.600 15:35:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:10:36.600 15:35:58 -- common/autotest_common.sh@857 -- # local i 00:10:36.600 15:35:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:36.600 15:35:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:36.600 15:35:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:10:36.600 15:35:58 -- common/autotest_common.sh@861 -- # break 00:10:36.600 15:35:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:36.600 15:35:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:36.600 15:35:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:36.600 1+0 records in 00:10:36.600 1+0 records out 00:10:36.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491134 s, 8.3 MB/s 00:10:36.600 15:35:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.600 15:35:58 -- common/autotest_common.sh@874 -- # size=4096 00:10:36.600 15:35:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.600 15:35:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:36.600 15:35:58 -- common/autotest_common.sh@877 -- # return 0 00:10:36.600 15:35:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:36.600 15:35:58 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:36.600 15:35:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:10:36.858 /dev/nbd10 00:10:36.858 15:35:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:36.858 15:35:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:36.858 15:35:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:10:36.858 15:35:58 -- common/autotest_common.sh@857 -- # local i 00:10:36.858 15:35:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:36.858 15:35:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:36.858 15:35:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:10:36.858 15:35:58 -- common/autotest_common.sh@861 -- # break 00:10:36.858 15:35:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:36.858 15:35:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:36.858 15:35:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:36.858 1+0 records in 00:10:36.858 1+0 records out 00:10:36.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620604 s, 6.6 MB/s 00:10:36.858 15:35:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.858 15:35:58 -- common/autotest_common.sh@874 -- # size=4096 00:10:36.858 15:35:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
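In this second pass every bdev is pinned to an explicit node (Nvme0n1p1 to /dev/nbd0, Nvme0n1p2 to /dev/nbd1, Nvme1n1 to /dev/nbd10, and so on), and the resulting mapping can be cross-checked with nbd_get_disks, whose JSON appears verbatim earlier in the log. A sketch (nbd_device and bdev_name are the field names shown above; the jq filter itself is an assumption):

sock=/var/tmp/spdk-nbd.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s "$sock" nbd_start_disk Nvme0n1p1 /dev/nbd0
"$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | "\(.nbd_device) -> \(.bdev_name)"'
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0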
00:10:36.858 15:35:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:36.858 15:35:58 -- common/autotest_common.sh@877 -- # return 0 00:10:36.858 15:35:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:36.858 15:35:58 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:36.858 15:35:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:10:37.116 /dev/nbd11 00:10:37.116 15:35:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:37.116 15:35:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:37.116 15:35:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:10:37.116 15:35:58 -- common/autotest_common.sh@857 -- # local i 00:10:37.116 15:35:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:37.116 15:35:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:37.116 15:35:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:10:37.116 15:35:58 -- common/autotest_common.sh@861 -- # break 00:10:37.116 15:35:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:37.116 15:35:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:37.116 15:35:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:37.116 1+0 records in 00:10:37.116 1+0 records out 00:10:37.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00069088 s, 5.9 MB/s 00:10:37.116 15:35:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.116 15:35:58 -- common/autotest_common.sh@874 -- # size=4096 00:10:37.116 15:35:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.116 15:35:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:37.116 15:35:58 -- common/autotest_common.sh@877 -- # return 0 00:10:37.116 15:35:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:37.116 15:35:58 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:37.116 15:35:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:10:37.374 /dev/nbd12 00:10:37.374 15:35:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:37.374 15:35:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:37.374 15:35:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:10:37.374 15:35:58 -- common/autotest_common.sh@857 -- # local i 00:10:37.374 15:35:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:37.374 15:35:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:37.374 15:35:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:10:37.374 15:35:58 -- common/autotest_common.sh@861 -- # break 00:10:37.374 15:35:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:37.374 15:35:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:37.374 15:35:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:37.374 1+0 records in 00:10:37.374 1+0 records out 00:10:37.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618889 s, 6.6 MB/s 00:10:37.374 15:35:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.374 15:35:58 -- common/autotest_common.sh@874 -- # size=4096 00:10:37.374 15:35:58 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.374 15:35:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:37.374 15:35:58 -- common/autotest_common.sh@877 -- # return 0 00:10:37.374 15:35:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:37.374 15:35:58 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:37.374 15:35:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:10:37.632 /dev/nbd13 00:10:37.632 15:35:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:37.632 15:35:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:37.632 15:35:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:10:37.632 15:35:59 -- common/autotest_common.sh@857 -- # local i 00:10:37.632 15:35:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:37.632 15:35:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:37.632 15:35:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:10:37.632 15:35:59 -- common/autotest_common.sh@861 -- # break 00:10:37.632 15:35:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:37.632 15:35:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:37.632 15:35:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:37.632 1+0 records in 00:10:37.632 1+0 records out 00:10:37.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729912 s, 5.6 MB/s 00:10:37.632 15:35:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.632 15:35:59 -- common/autotest_common.sh@874 -- # size=4096 00:10:37.632 15:35:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.632 15:35:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:37.632 15:35:59 -- common/autotest_common.sh@877 -- # return 0 00:10:37.632 15:35:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:37.632 15:35:59 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:37.632 15:35:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:10:37.897 /dev/nbd14 00:10:37.897 15:35:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:37.898 15:35:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:37.898 15:35:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:10:37.898 15:35:59 -- common/autotest_common.sh@857 -- # local i 00:10:37.898 15:35:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:37.898 15:35:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:37.898 15:35:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:10:37.898 15:35:59 -- common/autotest_common.sh@861 -- # break 00:10:37.898 15:35:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:37.898 15:35:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:37.898 15:35:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:37.898 1+0 records in 00:10:37.898 1+0 records out 00:10:37.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00068809 s, 6.0 MB/s 00:10:37.898 15:35:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.898 15:35:59 -- common/autotest_common.sh@874 -- # size=4096 00:10:37.898 15:35:59 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.898 15:35:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:37.898 15:35:59 -- common/autotest_common.sh@877 -- # return 0 00:10:37.898 15:35:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:37.898 15:35:59 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:37.898 15:35:59 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:37.898 15:35:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:37.898 15:35:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:38.159 15:35:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:38.159 { 00:10:38.159 "nbd_device": "/dev/nbd0", 00:10:38.159 "bdev_name": "Nvme0n1p1" 00:10:38.159 }, 00:10:38.159 { 00:10:38.159 "nbd_device": "/dev/nbd1", 00:10:38.159 "bdev_name": "Nvme0n1p2" 00:10:38.159 }, 00:10:38.159 { 00:10:38.159 "nbd_device": "/dev/nbd10", 00:10:38.159 "bdev_name": "Nvme1n1" 00:10:38.159 }, 00:10:38.159 { 00:10:38.159 "nbd_device": "/dev/nbd11", 00:10:38.159 "bdev_name": "Nvme2n1" 00:10:38.159 }, 00:10:38.159 { 00:10:38.159 "nbd_device": "/dev/nbd12", 00:10:38.159 "bdev_name": "Nvme2n2" 00:10:38.159 }, 00:10:38.159 { 00:10:38.159 "nbd_device": "/dev/nbd13", 00:10:38.159 "bdev_name": "Nvme2n3" 00:10:38.159 }, 00:10:38.159 { 00:10:38.159 "nbd_device": "/dev/nbd14", 00:10:38.159 "bdev_name": "Nvme3n1" 00:10:38.159 } 00:10:38.159 ]' 00:10:38.159 15:35:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:38.159 15:35:59 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:38.159 { 00:10:38.159 "nbd_device": "/dev/nbd0", 00:10:38.159 "bdev_name": "Nvme0n1p1" 00:10:38.159 }, 00:10:38.159 { 00:10:38.159 "nbd_device": "/dev/nbd1", 00:10:38.159 "bdev_name": "Nvme0n1p2" 00:10:38.159 }, 00:10:38.159 { 00:10:38.159 "nbd_device": "/dev/nbd10", 00:10:38.159 "bdev_name": "Nvme1n1" 00:10:38.159 }, 00:10:38.159 { 00:10:38.159 "nbd_device": "/dev/nbd11", 00:10:38.159 "bdev_name": "Nvme2n1" 00:10:38.159 }, 00:10:38.159 { 00:10:38.159 "nbd_device": "/dev/nbd12", 00:10:38.159 "bdev_name": "Nvme2n2" 00:10:38.159 }, 00:10:38.159 { 00:10:38.159 "nbd_device": "/dev/nbd13", 00:10:38.159 "bdev_name": "Nvme2n3" 00:10:38.159 }, 00:10:38.159 { 00:10:38.159 "nbd_device": "/dev/nbd14", 00:10:38.159 "bdev_name": "Nvme3n1" 00:10:38.159 } 00:10:38.159 ]' 00:10:38.159 15:35:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:38.159 /dev/nbd1 00:10:38.159 /dev/nbd10 00:10:38.159 /dev/nbd11 00:10:38.159 /dev/nbd12 00:10:38.159 /dev/nbd13 00:10:38.159 /dev/nbd14' 00:10:38.159 15:35:59 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:38.159 /dev/nbd1 00:10:38.159 /dev/nbd10 00:10:38.159 /dev/nbd11 00:10:38.159 /dev/nbd12 00:10:38.160 /dev/nbd13 00:10:38.160 /dev/nbd14' 00:10:38.160 15:35:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:38.418 15:35:59 -- bdev/nbd_common.sh@65 -- # count=7 00:10:38.418 15:35:59 -- bdev/nbd_common.sh@66 -- # echo 7 00:10:38.418 15:35:59 -- bdev/nbd_common.sh@95 -- # count=7 00:10:38.418 15:35:59 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:10:38.418 15:35:59 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:10:38.418 15:35:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:38.418 15:35:59 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:10:38.418 15:35:59 -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:38.418 15:35:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:38.418 15:35:59 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:38.418 15:35:59 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:38.418 256+0 records in 00:10:38.418 256+0 records out 00:10:38.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00704273 s, 149 MB/s 00:10:38.418 15:35:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:38.418 15:35:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:38.418 256+0 records in 00:10:38.418 256+0 records out 00:10:38.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147059 s, 7.1 MB/s 00:10:38.418 15:35:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:38.418 15:35:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:38.675 256+0 records in 00:10:38.675 256+0 records out 00:10:38.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154801 s, 6.8 MB/s 00:10:38.675 15:36:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:38.675 15:36:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:38.675 256+0 records in 00:10:38.675 256+0 records out 00:10:38.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167195 s, 6.3 MB/s 00:10:38.675 15:36:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:38.675 15:36:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:38.933 256+0 records in 00:10:38.933 256+0 records out 00:10:38.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153707 s, 6.8 MB/s 00:10:38.933 15:36:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:38.933 15:36:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:39.190 256+0 records in 00:10:39.190 256+0 records out 00:10:39.190 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148381 s, 7.1 MB/s 00:10:39.190 15:36:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:39.190 15:36:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:39.190 256+0 records in 00:10:39.190 256+0 records out 00:10:39.190 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152514 s, 6.9 MB/s 00:10:39.190 15:36:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:39.190 15:36:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:10:39.448 256+0 records in 00:10:39.448 256+0 records out 00:10:39.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165795 s, 6.3 MB/s 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@71 -- # local 
operation=verify 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@51 -- # local i 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.448 15:36:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:39.706 15:36:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:39.706 15:36:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:39.706 15:36:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:39.706 15:36:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.706 15:36:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.706 15:36:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:39.706 15:36:01 -- bdev/nbd_common.sh@41 -- # break 00:10:39.706 15:36:01 -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.706 15:36:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.706 15:36:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:39.964 15:36:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:39.964 15:36:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:39.964 15:36:01 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:10:39.964 15:36:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.964 15:36:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.964 15:36:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:39.964 15:36:01 -- bdev/nbd_common.sh@41 -- # break 00:10:39.964 15:36:01 -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.964 15:36:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.964 15:36:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:40.223 15:36:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:40.223 15:36:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:40.223 15:36:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:40.223 15:36:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.223 15:36:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.223 15:36:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:40.223 15:36:01 -- bdev/nbd_common.sh@41 -- # break 00:10:40.223 15:36:01 -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.223 15:36:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.223 15:36:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:40.481 15:36:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:40.481 15:36:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:40.481 15:36:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:40.481 15:36:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.481 15:36:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.481 15:36:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:40.481 15:36:01 -- bdev/nbd_common.sh@41 -- # break 00:10:40.481 15:36:01 -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.481 15:36:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.481 15:36:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:40.738 15:36:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:40.738 15:36:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:40.738 15:36:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:40.738 15:36:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.738 15:36:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.738 15:36:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:40.738 15:36:02 -- bdev/nbd_common.sh@41 -- # break 00:10:40.738 15:36:02 -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.738 15:36:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.738 15:36:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:40.997 15:36:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:40.997 15:36:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:40.997 15:36:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:40.997 15:36:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.997 15:36:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.997 15:36:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:40.997 15:36:02 -- bdev/nbd_common.sh@41 -- # break 00:10:40.997 15:36:02 -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.997 15:36:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:10:40.997 15:36:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:10:41.254 15:36:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:10:41.254 15:36:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:10:41.254 15:36:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:10:41.254 15:36:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.254 15:36:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.254 15:36:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:10:41.254 15:36:02 -- bdev/nbd_common.sh@41 -- # break 00:10:41.254 15:36:02 -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.254 15:36:02 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:41.254 15:36:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.255 15:36:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:41.512 15:36:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:41.512 15:36:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:41.512 15:36:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:41.771 15:36:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:41.771 15:36:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:41.771 15:36:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:41.771 15:36:03 -- bdev/nbd_common.sh@65 -- # true 00:10:41.771 15:36:03 -- bdev/nbd_common.sh@65 -- # count=0 00:10:41.771 15:36:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:41.771 15:36:03 -- bdev/nbd_common.sh@104 -- # count=0 00:10:41.771 15:36:03 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:41.771 15:36:03 -- bdev/nbd_common.sh@109 -- # return 0 00:10:41.771 15:36:03 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:41.771 15:36:03 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.771 15:36:03 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:41.771 15:36:03 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:10:41.771 15:36:03 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:10:41.771 15:36:03 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:42.028 malloc_lvol_verify 00:10:42.028 15:36:03 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:42.286 fa153131-8f4d-46f3-beed-07713bfe128f 00:10:42.286 15:36:03 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:42.544 6e6010c0-0031-43f9-aa6b-2e81fb5e04f8 00:10:42.544 15:36:03 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:42.544 /dev/nbd0 00:10:42.803 15:36:04 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:10:42.803 mke2fs 1.46.5 (30-Dec-2021) 00:10:42.803 Discarding device blocks: 0/4096 done 00:10:42.803 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:42.803 00:10:42.803 Allocating group tables: 0/1 done 00:10:42.803 Writing inode tables: 0/1 done 00:10:42.803 Creating journal (1024 blocks): done 
00:10:42.803 Writing superblocks and filesystem accounting information: 0/1 done 00:10:42.803 00:10:42.803 15:36:04 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:10:42.803 15:36:04 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:42.803 15:36:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.803 15:36:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:42.803 15:36:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:42.803 15:36:04 -- bdev/nbd_common.sh@51 -- # local i 00:10:42.803 15:36:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:42.803 15:36:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:42.803 15:36:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:42.803 15:36:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:42.803 15:36:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:43.062 15:36:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:43.062 15:36:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:43.062 15:36:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:43.062 15:36:04 -- bdev/nbd_common.sh@41 -- # break 00:10:43.062 15:36:04 -- bdev/nbd_common.sh@45 -- # return 0 00:10:43.062 15:36:04 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:10:43.062 15:36:04 -- bdev/nbd_common.sh@147 -- # return 0 00:10:43.062 15:36:04 -- bdev/blockdev.sh@324 -- # killprocess 62726 00:10:43.062 15:36:04 -- common/autotest_common.sh@926 -- # '[' -z 62726 ']' 00:10:43.062 15:36:04 -- common/autotest_common.sh@930 -- # kill -0 62726 00:10:43.062 15:36:04 -- common/autotest_common.sh@931 -- # uname 00:10:43.062 15:36:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:43.062 15:36:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62726 00:10:43.062 15:36:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:43.062 15:36:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:43.062 killing process with pid 62726 00:10:43.062 15:36:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62726' 00:10:43.062 15:36:04 -- common/autotest_common.sh@945 -- # kill 62726 00:10:43.062 15:36:04 -- common/autotest_common.sh@950 -- # wait 62726 00:10:43.995 15:36:05 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:10:43.995 00:10:43.995 real 0m14.313s 00:10:43.995 user 0m20.293s 00:10:43.995 sys 0m4.338s 00:10:43.995 15:36:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.995 15:36:05 -- common/autotest_common.sh@10 -- # set +x 00:10:43.995 ************************************ 00:10:43.995 END TEST bdev_nbd 00:10:43.995 ************************************ 00:10:44.253 15:36:05 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:10:44.253 15:36:05 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:10:44.253 15:36:05 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:10:44.253 skipping fio tests on NVMe due to multi-ns failures. 00:10:44.253 15:36:05 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
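Before bdev_verify starts, the data pass bdev_nbd just completed is worth spelling out: one random 1 MiB pattern fanned out to every NBD device and byte-compared back. Stripped of the xtrace noise, it is this round-trip:

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
    # write phase: generate the pattern once, copy it to each device with direct I/O
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify phase: cmp exits non-zero on the first differing byte, failing the test
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"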
00:10:44.253 15:36:05 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:44.253 15:36:05 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:44.253 15:36:05 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:10:44.253 15:36:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:44.253 15:36:05 -- common/autotest_common.sh@10 -- # set +x 00:10:44.253 ************************************ 00:10:44.253 START TEST bdev_verify 00:10:44.253 ************************************ 00:10:44.253 15:36:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:44.253 [2024-07-24 15:36:05.746358] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:44.253 [2024-07-24 15:36:05.746519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63179 ] 00:10:44.511 [2024-07-24 15:36:05.916684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:44.511 [2024-07-24 15:36:06.100794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.511 [2024-07-24 15:36:06.100796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.443 Running I/O for 5 seconds... 00:10:50.703 00:10:50.703 Latency(us) 00:10:50.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.703 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:50.703 Verification LBA range: start 0x0 length 0x5e800 00:10:50.703 Nvme0n1p1 : 5.06 2147.68 8.39 0.00 0.00 59415.13 7596.22 72447.07 00:10:50.703 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:50.703 Verification LBA range: start 0x5e800 length 0x5e800 00:10:50.703 Nvme0n1p1 : 5.06 2129.60 8.32 0.00 0.00 59890.39 8102.63 63867.81 00:10:50.703 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:50.703 Verification LBA range: start 0x0 length 0x5e7ff 00:10:50.703 Nvme0n1p2 : 5.06 2146.30 8.38 0.00 0.00 59376.86 9472.93 71017.19 00:10:50.703 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:50.703 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:10:50.703 Nvme0n1p2 : 5.06 2128.21 8.31 0.00 0.00 59851.94 9889.98 60769.75 00:10:50.703 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:50.703 Verification LBA range: start 0x0 length 0xa0000 00:10:50.703 Nvme1n1 : 5.07 2150.29 8.40 0.00 0.00 59131.14 4498.15 58148.31 00:10:50.703 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:50.703 Verification LBA range: start 0xa0000 length 0xa0000 00:10:50.703 Nvme1n1 : 5.06 2126.83 8.31 0.00 0.00 59819.84 12034.79 57909.99 00:10:50.703 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:50.703 Verification LBA range: start 0x0 length 0x80000 00:10:50.703 Nvme2n1 : 5.07 2149.58 8.40 0.00 0.00 59084.60 5391.83 55765.18 00:10:50.703 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:50.703 Verification LBA range: start 0x80000 length 0x80000 00:10:50.703 Nvme2n1 : 
5.07 2133.45 8.33 0.00 0.00 59593.18 4021.53 55765.18 00:10:50.703 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:50.704 Verification LBA range: start 0x0 length 0x80000 00:10:50.704 Nvme2n2 : 5.07 2148.10 8.39 0.00 0.00 59028.54 7685.59 52667.11 00:10:50.704 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:50.704 Verification LBA range: start 0x80000 length 0x80000 00:10:50.704 Nvme2n2 : 5.07 2131.91 8.33 0.00 0.00 59544.52 6404.65 55765.18 00:10:50.704 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:50.704 Verification LBA range: start 0x0 length 0x80000 00:10:50.704 Nvme2n3 : 5.08 2146.71 8.39 0.00 0.00 58987.04 9830.40 52667.11 00:10:50.704 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:50.704 Verification LBA range: start 0x80000 length 0x80000 00:10:50.704 Nvme2n3 : 5.08 2130.45 8.32 0.00 0.00 59512.18 8579.26 57195.05 00:10:50.704 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:50.704 Verification LBA range: start 0x0 length 0x20000 00:10:50.704 Nvme3n1 : 5.08 2145.58 8.38 0.00 0.00 58962.80 11617.75 51713.86 00:10:50.704 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:50.704 Verification LBA range: start 0x20000 length 0x20000 00:10:50.704 Nvme3n1 : 5.08 2129.23 8.32 0.00 0.00 59484.53 10545.34 57195.05 00:10:50.704 =================================================================================================================== 00:10:50.704 Total : 29943.90 116.97 0.00 0.00 59404.48 4021.53 72447.07 00:10:52.602 00:10:52.602 real 0m8.519s 00:10:52.602 user 0m15.700s 00:10:52.602 sys 0m0.266s 00:10:52.602 15:36:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.602 15:36:14 -- common/autotest_common.sh@10 -- # set +x 00:10:52.602 ************************************ 00:10:52.602 END TEST bdev_verify 00:10:52.602 ************************************ 00:10:52.602 15:36:14 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:52.602 15:36:14 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:10:52.602 15:36:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:52.602 15:36:14 -- common/autotest_common.sh@10 -- # set +x 00:10:52.859 ************************************ 00:10:52.859 START TEST bdev_verify_big_io 00:10:52.859 ************************************ 00:10:52.859 15:36:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:52.859 [2024-07-24 15:36:14.280307] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
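The verify table above is produced by a single bdevperf invocation; the command echoed at the start of the test is the whole interface:

    # -q 128: 128 outstanding I/Os, -o 4096: 4 KiB I/O size, -w verify: read-back
    # verification workload, -t 5: five-second run, -m 0x3: reactors on cores 0 and 1
    # (-C is forwarded unchanged from the test harness)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''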
00:10:52.859 [2024-07-24 15:36:14.280451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63288 ] 00:10:52.859 [2024-07-24 15:36:14.443751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:53.117 [2024-07-24 15:36:14.670467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.117 [2024-07-24 15:36:14.670476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.050 Running I/O for 5 seconds... 00:11:00.608 00:11:00.608 Latency(us) 00:11:00.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.608 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:00.608 Verification LBA range: start 0x0 length 0x5e80 00:11:00.608 Nvme0n1p1 : 5.41 224.22 14.01 0.00 0.00 560268.51 55050.24 743535.71 00:11:00.608 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:00.608 Verification LBA range: start 0x5e80 length 0x5e80 00:11:00.609 Nvme0n1p1 : 5.45 222.87 13.93 0.00 0.00 565601.69 33602.09 754974.72 00:11:00.609 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:00.609 Verification LBA range: start 0x0 length 0x5e7f 00:11:00.609 Nvme0n1p2 : 5.42 224.12 14.01 0.00 0.00 553897.28 54811.93 690153.66 00:11:00.609 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:00.609 Verification LBA range: start 0x5e7f length 0x5e7f 00:11:00.609 Nvme0n1p2 : 5.45 222.78 13.92 0.00 0.00 557510.05 32887.16 693966.66 00:11:00.609 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:00.609 Verification LBA range: start 0x0 length 0xa000 00:11:00.609 Nvme1n1 : 5.42 224.02 14.00 0.00 0.00 546604.59 55526.87 648210.62 00:11:00.609 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:00.609 Verification LBA range: start 0xa000 length 0xa000 00:11:00.609 Nvme1n1 : 5.45 222.68 13.92 0.00 0.00 550257.41 33840.41 640584.61 00:11:00.609 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:00.609 Verification LBA range: start 0x0 length 0x8000 00:11:00.609 Nvme2n1 : 5.45 229.91 14.37 0.00 0.00 527025.65 28120.90 591015.56 00:11:00.609 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:00.609 Verification LBA range: start 0x8000 length 0x8000 00:11:00.609 Nvme2n1 : 5.46 222.54 13.91 0.00 0.00 542948.24 35508.60 606267.58 00:11:00.609 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:00.609 Verification LBA range: start 0x0 length 0x8000 00:11:00.609 Nvme2n2 : 5.45 229.82 14.36 0.00 0.00 520015.01 28955.00 575763.55 00:11:00.609 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:00.609 Verification LBA range: start 0x8000 length 0x8000 00:11:00.609 Nvme2n2 : 5.47 229.04 14.31 0.00 0.00 521720.72 12809.31 613893.59 00:11:00.609 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:00.609 Verification LBA range: start 0x0 length 0x8000 00:11:00.609 Nvme2n3 : 5.46 237.60 14.85 0.00 0.00 498684.16 2666.12 583389.56 00:11:00.609 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:00.609 Verification LBA range: start 0x8000 length 0x8000 00:11:00.609 Nvme2n3 : 5.47 228.97 14.31 0.00 0.00 514342.20 13047.62 831234.79 
00:11:00.609 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:00.609 Verification LBA range: start 0x0 length 0x2000 00:11:00.609 Nvme3n1 : 5.46 246.87 15.43 0.00 0.00 474101.96 2829.96 823608.79 00:11:00.609 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:00.609 Verification LBA range: start 0x2000 length 0x2000 00:11:00.609 Nvme3n1 : 5.49 245.84 15.36 0.00 0.00 473914.34 6404.65 777852.74 00:11:00.609 =================================================================================================================== 00:11:00.609 Total : 3211.28 200.71 0.00 0.00 527996.61 2666.12 831234.79 00:11:01.175 00:11:01.175 real 0m8.508s 00:11:01.175 user 0m15.674s 00:11:01.175 sys 0m0.265s 00:11:01.175 15:36:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.175 15:36:22 -- common/autotest_common.sh@10 -- # set +x 00:11:01.175 ************************************ 00:11:01.175 END TEST bdev_verify_big_io 00:11:01.175 ************************************ 00:11:01.175 15:36:22 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:01.175 15:36:22 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:01.175 15:36:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:01.175 15:36:22 -- common/autotest_common.sh@10 -- # set +x 00:11:01.175 ************************************ 00:11:01.175 START TEST bdev_write_zeroes 00:11:01.175 ************************************ 00:11:01.175 15:36:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:01.433 [2024-07-24 15:36:22.859752] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:01.433 [2024-07-24 15:36:22.859920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63396 ] 00:11:01.434 [2024-07-24 15:36:23.023398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.692 [2024-07-24 15:36:23.210044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.269 Running I/O for 1 seconds... 
00:11:03.640 00:11:03.640 Latency(us) 00:11:03.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:03.640 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.640 Nvme0n1p1 : 1.02 6842.69 26.73 0.00 0.00 18642.69 12153.95 41943.04 00:11:03.640 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.640 Nvme0n1p2 : 1.02 6833.13 26.69 0.00 0.00 18637.24 13226.36 30504.03 00:11:03.640 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.640 Nvme1n1 : 1.02 6824.56 26.66 0.00 0.00 18598.01 12213.53 29074.15 00:11:03.640 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.640 Nvme2n1 : 1.02 6816.02 26.63 0.00 0.00 18521.53 11796.48 26333.56 00:11:03.640 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.640 Nvme2n2 : 1.02 6807.35 26.59 0.00 0.00 18506.66 12094.37 26333.56 00:11:03.640 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.640 Nvme2n3 : 1.03 6854.43 26.78 0.00 0.00 18410.71 9055.88 26571.87 00:11:03.640 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.640 Nvme3n1 : 1.03 6845.81 26.74 0.00 0.00 18394.66 9472.93 25856.93 00:11:03.640 =================================================================================================================== 00:11:03.640 Total : 47824.00 186.81 0.00 0.00 18529.88 9055.88 41943.04 00:11:04.599 00:11:04.599 real 0m3.223s 00:11:04.599 user 0m2.886s 00:11:04.599 sys 0m0.216s 00:11:04.599 ************************************ 00:11:04.599 END TEST bdev_write_zeroes 00:11:04.599 15:36:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.599 15:36:25 -- common/autotest_common.sh@10 -- # set +x 00:11:04.599 ************************************ 00:11:04.599 15:36:26 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:04.599 15:36:26 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:04.599 15:36:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:04.599 15:36:26 -- common/autotest_common.sh@10 -- # set +x 00:11:04.599 ************************************ 00:11:04.599 START TEST bdev_json_nonenclosed 00:11:04.599 ************************************ 00:11:04.599 15:36:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:04.599 [2024-07-24 15:36:26.121346] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:04.599 [2024-07-24 15:36:26.121524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63451 ] 00:11:04.857 [2024-07-24 15:36:26.291058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.115 [2024-07-24 15:36:26.465339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.115 [2024-07-24 15:36:26.465552] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
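That error is the expected outcome: nonenclosed.json hands spdk_subsystem_init_from_json_config a configuration whose top level is not a JSON object. A minimal contrast, with the invalid shape assumed for illustration rather than copied from the fixture:

    # valid: the whole configuration is enclosed in one top-level object
    echo '{ "subsystems": [] }' > enclosed.json
    # rejected with "not enclosed in {}": top-level value is not an object (shape assumed)
    echo '[ { "subsystems": [] } ]' > nonenclosed.json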
00:11:05.115 [2024-07-24 15:36:26.465582] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:05.373 00:11:05.373 real 0m0.839s 00:11:05.373 user 0m0.604s 00:11:05.373 sys 0m0.130s 00:11:05.373 ************************************ 00:11:05.373 END TEST bdev_json_nonenclosed 00:11:05.373 ************************************ 00:11:05.373 15:36:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.373 15:36:26 -- common/autotest_common.sh@10 -- # set +x 00:11:05.373 15:36:26 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:05.373 15:36:26 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:05.373 15:36:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:05.373 15:36:26 -- common/autotest_common.sh@10 -- # set +x 00:11:05.373 ************************************ 00:11:05.373 START TEST bdev_json_nonarray 00:11:05.373 ************************************ 00:11:05.373 15:36:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:05.631 [2024-07-24 15:36:27.010753] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:05.631 [2024-07-24 15:36:27.010919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63476 ] 00:11:05.631 [2024-07-24 15:36:27.185533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.889 [2024-07-24 15:36:27.366889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.889 [2024-07-24 15:36:27.367152] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
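The companion negative test: here the enclosing object is fine, but "subsystems" has the wrong JSON type. Again a guessed minimal reproduction, not the fixture itself:

    # rejected with "'subsystems' should be an array": an object where an array belongs
    echo '{ "subsystems": {} }' > nonarray.json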
00:11:05.889 [2024-07-24 15:36:27.367182] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:06.148 00:11:06.148 real 0m0.827s 00:11:06.148 user 0m0.585s 00:11:06.148 sys 0m0.135s 00:11:06.148 ************************************ 00:11:06.148 END TEST bdev_json_nonarray 00:11:06.148 ************************************ 00:11:06.148 15:36:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.148 15:36:27 -- common/autotest_common.sh@10 -- # set +x 00:11:06.406 15:36:27 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:11:06.406 15:36:27 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:11:06.406 15:36:27 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:06.406 15:36:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:06.406 15:36:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:06.406 15:36:27 -- common/autotest_common.sh@10 -- # set +x 00:11:06.406 ************************************ 00:11:06.406 START TEST bdev_gpt_uuid 00:11:06.406 ************************************ 00:11:06.406 15:36:27 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:11:06.406 15:36:27 -- bdev/blockdev.sh@612 -- # local bdev 00:11:06.406 15:36:27 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:11:06.406 15:36:27 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=63507 00:11:06.406 15:36:27 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:06.406 15:36:27 -- bdev/blockdev.sh@47 -- # waitforlisten 63507 00:11:06.406 15:36:27 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:06.406 15:36:27 -- common/autotest_common.sh@819 -- # '[' -z 63507 ']' 00:11:06.406 15:36:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.406 15:36:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:06.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.406 15:36:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.406 15:36:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:06.406 15:36:27 -- common/autotest_common.sh@10 -- # set +x 00:11:06.406 [2024-07-24 15:36:27.903756] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:06.406 [2024-07-24 15:36:27.903917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63507 ] 00:11:06.664 [2024-07-24 15:36:28.071066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.664 [2024-07-24 15:36:28.254369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:06.664 [2024-07-24 15:36:28.254592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.038 15:36:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:08.038 15:36:29 -- common/autotest_common.sh@852 -- # return 0 00:11:08.038 15:36:29 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:08.038 15:36:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:08.038 15:36:29 -- common/autotest_common.sh@10 -- # set +x 00:11:08.603 Some configs were skipped because the RPC state that can call them passed over. 
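The assertions that follow look each GPT partition bdev up by its partition GUID and check the JSON the target returns. Using the first partition's GUID from the log, the core of the check is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # the unique partition GUID doubles as the bdev alias, so it resolves directly
    bdev=$($rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
    alias=$(jq -r '.[0].aliases[0]' <<< "$bdev")
    guid=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev")
    # exactly one bdev must match, and its alias must equal its GPT GUID
    [[ $(jq -r length <<< "$bdev") == 1 && $alias == "$guid" ]]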
00:11:08.603 15:36:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:08.603 15:36:29 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:11:08.603 15:36:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:08.603 15:36:29 -- common/autotest_common.sh@10 -- # set +x 00:11:08.603 15:36:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:08.603 15:36:29 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:11:08.603 15:36:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:08.603 15:36:29 -- common/autotest_common.sh@10 -- # set +x 00:11:08.603 15:36:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:08.603 15:36:29 -- bdev/blockdev.sh@619 -- # bdev='[ 00:11:08.603 { 00:11:08.603 "name": "Nvme0n1p1", 00:11:08.603 "aliases": [ 00:11:08.603 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:11:08.603 ], 00:11:08.603 "product_name": "GPT Disk", 00:11:08.603 "block_size": 4096, 00:11:08.603 "num_blocks": 774144, 00:11:08.603 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:08.603 "md_size": 64, 00:11:08.603 "md_interleave": false, 00:11:08.603 "dif_type": 0, 00:11:08.603 "assigned_rate_limits": { 00:11:08.603 "rw_ios_per_sec": 0, 00:11:08.603 "rw_mbytes_per_sec": 0, 00:11:08.603 "r_mbytes_per_sec": 0, 00:11:08.603 "w_mbytes_per_sec": 0 00:11:08.603 }, 00:11:08.603 "claimed": false, 00:11:08.603 "zoned": false, 00:11:08.603 "supported_io_types": { 00:11:08.603 "read": true, 00:11:08.603 "write": true, 00:11:08.603 "unmap": true, 00:11:08.603 "write_zeroes": true, 00:11:08.603 "flush": true, 00:11:08.603 "reset": true, 00:11:08.603 "compare": true, 00:11:08.603 "compare_and_write": false, 00:11:08.603 "abort": true, 00:11:08.603 "nvme_admin": false, 00:11:08.603 "nvme_io": false 00:11:08.603 }, 00:11:08.603 "driver_specific": { 00:11:08.603 "gpt": { 00:11:08.603 "base_bdev": "Nvme0n1", 00:11:08.603 "offset_blocks": 256, 00:11:08.603 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:11:08.603 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:08.603 "partition_name": "SPDK_TEST_first" 00:11:08.603 } 00:11:08.603 } 00:11:08.603 } 00:11:08.603 ]' 00:11:08.603 15:36:29 -- bdev/blockdev.sh@620 -- # jq -r length 00:11:08.603 15:36:29 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:11:08.603 15:36:29 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:11:08.604 15:36:30 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:08.604 15:36:30 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:08.604 15:36:30 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:08.604 15:36:30 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:08.604 15:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:08.604 15:36:30 -- common/autotest_common.sh@10 -- # set +x 00:11:08.604 15:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:08.604 15:36:30 -- bdev/blockdev.sh@624 -- # bdev='[ 00:11:08.604 { 00:11:08.604 "name": "Nvme0n1p2", 00:11:08.604 "aliases": [ 00:11:08.604 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:11:08.604 ], 00:11:08.604 "product_name": "GPT Disk", 00:11:08.604 "block_size": 4096, 00:11:08.604 "num_blocks": 774143, 00:11:08.604 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:11:08.604 "md_size": 64, 00:11:08.604 "md_interleave": false, 00:11:08.604 "dif_type": 0, 00:11:08.604 "assigned_rate_limits": { 00:11:08.604 "rw_ios_per_sec": 0, 00:11:08.604 "rw_mbytes_per_sec": 0, 00:11:08.604 "r_mbytes_per_sec": 0, 00:11:08.604 "w_mbytes_per_sec": 0 00:11:08.604 }, 00:11:08.604 "claimed": false, 00:11:08.604 "zoned": false, 00:11:08.604 "supported_io_types": { 00:11:08.604 "read": true, 00:11:08.604 "write": true, 00:11:08.604 "unmap": true, 00:11:08.604 "write_zeroes": true, 00:11:08.604 "flush": true, 00:11:08.604 "reset": true, 00:11:08.604 "compare": true, 00:11:08.604 "compare_and_write": false, 00:11:08.604 "abort": true, 00:11:08.604 "nvme_admin": false, 00:11:08.604 "nvme_io": false 00:11:08.604 }, 00:11:08.604 "driver_specific": { 00:11:08.604 "gpt": { 00:11:08.604 "base_bdev": "Nvme0n1", 00:11:08.604 "offset_blocks": 774400, 00:11:08.604 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:11:08.604 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:08.604 "partition_name": "SPDK_TEST_second" 00:11:08.604 } 00:11:08.604 } 00:11:08.604 } 00:11:08.604 ]' 00:11:08.604 15:36:30 -- bdev/blockdev.sh@625 -- # jq -r length 00:11:08.604 15:36:30 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:11:08.604 15:36:30 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:11:08.862 15:36:30 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:08.862 15:36:30 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:08.862 15:36:30 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:08.862 15:36:30 -- bdev/blockdev.sh@629 -- # killprocess 63507 00:11:08.862 15:36:30 -- common/autotest_common.sh@926 -- # '[' -z 63507 ']' 00:11:08.862 15:36:30 -- common/autotest_common.sh@930 -- # kill -0 63507 00:11:08.862 15:36:30 -- common/autotest_common.sh@931 -- # uname 00:11:08.862 15:36:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:08.862 15:36:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63507 00:11:08.862 15:36:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:08.862 killing process with pid 63507 00:11:08.862 15:36:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:08.862 15:36:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63507' 00:11:08.862 15:36:30 -- common/autotest_common.sh@945 -- # kill 63507 00:11:08.862 15:36:30 -- common/autotest_common.sh@950 -- # wait 63507 00:11:10.761 00:11:10.761 real 0m4.525s 00:11:10.761 user 0m5.080s 00:11:10.761 sys 0m0.454s 00:11:10.761 ************************************ 00:11:10.761 END TEST bdev_gpt_uuid 00:11:10.761 ************************************ 00:11:10.761 15:36:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.761 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:11:11.019 15:36:32 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:11:11.019 15:36:32 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:11:11.019 15:36:32 -- bdev/blockdev.sh@809 -- # cleanup 00:11:11.019 15:36:32 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:11.019 15:36:32 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:11.019 15:36:32 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:11:11.019 15:36:32 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:11:11.019 15:36:32 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:11:11.019 15:36:32 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:11.277 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:11.534 Waiting for block devices as requested 00:11:11.534 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:11.534 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:11.534 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:11.793 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.059 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:17.059 15:36:38 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:11:17.059 15:36:38 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:11:17.059 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:17.059 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:11:17.059 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:17.059 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:11:17.059 15:36:38 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:11:17.059 ************************************ 00:11:17.059 END TEST blockdev_nvme_gpt 00:11:17.059 ************************************ 00:11:17.059 00:11:17.059 real 1m6.530s 00:11:17.059 user 1m26.738s 00:11:17.059 sys 0m9.402s 00:11:17.059 15:36:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.059 15:36:38 -- common/autotest_common.sh@10 -- # set +x 00:11:17.059 15:36:38 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:17.059 15:36:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:17.059 15:36:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:17.059 15:36:38 -- common/autotest_common.sh@10 -- # set +x 00:11:17.059 ************************************ 00:11:17.059 START TEST nvme 00:11:17.059 ************************************ 00:11:17.059 15:36:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:17.317 * Looking for test storage... 00:11:17.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:17.317 15:36:38 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:18.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:18.249 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:18.249 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:18.249 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:18.507 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:18.507 15:36:39 -- nvme/nvme.sh@79 -- # uname 00:11:18.507 15:36:39 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:18.507 15:36:39 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:18.507 15:36:39 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:18.507 15:36:39 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:18.507 15:36:39 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:11:18.507 15:36:39 -- common/autotest_common.sh@1045 -- # echo 0 00:11:18.507 15:36:39 -- common/autotest_common.sh@1047 -- # stubpid=64179 00:11:18.507 Waiting for stub to ready for secondary processes... 
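The wait that begins here is a plain poll: the stub touches /var/run/spdk_stub0 once its DPDK primary process is up, and the harness loops until that file exists while the stub is still alive. Reconstructed from the traced calls, with the stub-died error path simplified away:

    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    while [ ! -e /var/run/spdk_stub0 ] && [ -e /proc/$stubpid ]; do
        sleep 1s
    done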
00:11:18.507 15:36:39 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:18.507 15:36:39 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:11:18.507 15:36:39 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:18.507 15:36:39 -- common/autotest_common.sh@1051 -- # [[ -e /proc/64179 ]] 00:11:18.507 15:36:39 -- common/autotest_common.sh@1052 -- # sleep 1s 00:11:18.507 [2024-07-24 15:36:40.008382] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:18.507 [2024-07-24 15:36:40.008549] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.456 [2024-07-24 15:36:40.792217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:19.456 15:36:40 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:19.456 15:36:40 -- common/autotest_common.sh@1051 -- # [[ -e /proc/64179 ]] 00:11:19.456 15:36:40 -- common/autotest_common.sh@1052 -- # sleep 1s 00:11:19.456 [2024-07-24 15:36:41.028906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.456 [2024-07-24 15:36:41.029033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.456 [2024-07-24 15:36:41.029044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.714 [2024-07-24 15:36:41.054346] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:19.714 [2024-07-24 15:36:41.066641] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:19.714 [2024-07-24 15:36:41.066915] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:19.714 [2024-07-24 15:36:41.078354] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:19.714 [2024-07-24 15:36:41.078559] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:19.714 [2024-07-24 15:36:41.078714] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:19.714 [2024-07-24 15:36:41.087728] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:19.714 [2024-07-24 15:36:41.087927] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:19.714 [2024-07-24 15:36:41.088078] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:19.714 [2024-07-24 15:36:41.097585] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:19.714 [2024-07-24 15:36:41.097840] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:19.714 [2024-07-24 15:36:41.098006] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:19.714 [2024-07-24 15:36:41.098156] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:11:19.714 [2024-07-24 15:36:41.098345] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:11:20.650 15:36:41 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:20.650 done. 
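Note on the stub startup above: the stub's "-m 0xE" core mask is binary 1110, which is why exactly three reactors come up, on cores 1-3, with core 0 left unset. The mask arithmetic can be checked in plain bash:

    $ m=0xE; for i in {0..3}; do (( (m >> i) & 1 )) && echo "reactor core $i"; done
    reactor core 1
    reactor core 2
    reactor core 3

The nvme_identify test traced below gathers its controller list the way the xtrace shows: gen_nvme.sh emits an SPDK JSON config and jq extracts each controller's PCI address from .params.traddr. A minimal reconstruction of that helper, assuming nothing beyond what the trace itself shows:

    get_nvme_bdfs() {
        local bdfs
        # gen_nvme.sh prints a JSON config; each attach entry carries the
        # controller's PCI address in .params.traddr
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1  # no NVMe controllers found
        printf '%s\n' "${bdfs[@]}"
    }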
00:11:20.650 15:36:41 -- common/autotest_common.sh@1054 -- # echo done. 00:11:20.650 15:36:41 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:20.650 15:36:41 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:11:20.650 15:36:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:20.650 15:36:41 -- common/autotest_common.sh@10 -- # set +x 00:11:20.650 ************************************ 00:11:20.650 START TEST nvme_reset 00:11:20.650 ************************************ 00:11:20.650 15:36:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:20.908 Initializing NVMe Controllers 00:11:20.908 Skipping QEMU NVMe SSD at 0000:00:06.0 00:11:20.908 Skipping QEMU NVMe SSD at 0000:00:07.0 00:11:20.908 Skipping QEMU NVMe SSD at 0000:00:09.0 00:11:20.908 Skipping QEMU NVMe SSD at 0000:00:08.0 00:11:20.908 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:20.908 00:11:20.908 real 0m0.300s 00:11:20.908 user 0m0.107s 00:11:20.908 sys 0m0.146s 00:11:20.908 15:36:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.908 15:36:42 -- common/autotest_common.sh@10 -- # set +x 00:11:20.908 ************************************ 00:11:20.908 END TEST nvme_reset 00:11:20.908 ************************************ 00:11:20.908 15:36:42 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:20.908 15:36:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:20.908 15:36:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:20.908 15:36:42 -- common/autotest_common.sh@10 -- # set +x 00:11:20.908 ************************************ 00:11:20.908 START TEST nvme_identify 00:11:20.908 ************************************ 00:11:20.908 15:36:42 -- common/autotest_common.sh@1104 -- # nvme_identify 00:11:20.908 15:36:42 -- nvme/nvme.sh@12 -- # bdfs=() 00:11:20.908 15:36:42 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:20.908 15:36:42 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:20.908 15:36:42 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:20.908 15:36:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:20.908 15:36:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:20.908 15:36:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:20.908 15:36:42 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:20.908 15:36:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:20.908 15:36:42 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:20.908 15:36:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:20.908 15:36:42 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:21.170 [2024-07-24 15:36:42.617851] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 64222 terminated unexpected 00:11:21.170 ===================================================== 00:11:21.170 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:21.170 ===================================================== 00:11:21.170 Controller Capabilities/Features 00:11:21.170 ================================ 00:11:21.170 Vendor ID: 1b36 00:11:21.170 Subsystem Vendor ID: 1af4 00:11:21.170 Serial Number: 12340 00:11:21.170 Model Number: QEMU NVMe 
Ctrl 00:11:21.170 Firmware Version: 8.0.0 00:11:21.170 Recommended Arb Burst: 6 00:11:21.170 IEEE OUI Identifier: 00 54 52 00:11:21.170 Multi-path I/O 00:11:21.170 May have multiple subsystem ports: No 00:11:21.170 May have multiple controllers: No 00:11:21.170 Associated with SR-IOV VF: No 00:11:21.170 Max Data Transfer Size: 524288 00:11:21.170 Max Number of Namespaces: 256 00:11:21.170 Max Number of I/O Queues: 64 00:11:21.170 NVMe Specification Version (VS): 1.4 00:11:21.170 NVMe Specification Version (Identify): 1.4 00:11:21.170 Maximum Queue Entries: 2048 00:11:21.170 Contiguous Queues Required: Yes 00:11:21.170 Arbitration Mechanisms Supported 00:11:21.170 Weighted Round Robin: Not Supported 00:11:21.170 Vendor Specific: Not Supported 00:11:21.170 Reset Timeout: 7500 ms 00:11:21.170 Doorbell Stride: 4 bytes 00:11:21.170 NVM Subsystem Reset: Not Supported 00:11:21.170 Command Sets Supported 00:11:21.170 NVM Command Set: Supported 00:11:21.170 Boot Partition: Not Supported 00:11:21.170 Memory Page Size Minimum: 4096 bytes 00:11:21.170 Memory Page Size Maximum: 65536 bytes 00:11:21.170 Persistent Memory Region: Not Supported 00:11:21.170 Optional Asynchronous Events Supported 00:11:21.170 Namespace Attribute Notices: Supported 00:11:21.170 Firmware Activation Notices: Not Supported 00:11:21.170 ANA Change Notices: Not Supported 00:11:21.170 PLE Aggregate Log Change Notices: Not Supported 00:11:21.170 LBA Status Info Alert Notices: Not Supported 00:11:21.170 EGE Aggregate Log Change Notices: Not Supported 00:11:21.170 Normal NVM Subsystem Shutdown event: Not Supported 00:11:21.170 Zone Descriptor Change Notices: Not Supported 00:11:21.170 Discovery Log Change Notices: Not Supported 00:11:21.170 Controller Attributes 00:11:21.170 128-bit Host Identifier: Not Supported 00:11:21.170 Non-Operational Permissive Mode: Not Supported 00:11:21.170 NVM Sets: Not Supported 00:11:21.170 Read Recovery Levels: Not Supported 00:11:21.170 Endurance Groups: Not Supported 00:11:21.170 Predictable Latency Mode: Not Supported 00:11:21.170 Traffic Based Keep ALive: Not Supported 00:11:21.170 Namespace Granularity: Not Supported 00:11:21.170 SQ Associations: Not Supported 00:11:21.170 UUID List: Not Supported 00:11:21.170 Multi-Domain Subsystem: Not Supported 00:11:21.170 Fixed Capacity Management: Not Supported 00:11:21.170 Variable Capacity Management: Not Supported 00:11:21.170 Delete Endurance Group: Not Supported 00:11:21.170 Delete NVM Set: Not Supported 00:11:21.170 Extended LBA Formats Supported: Supported 00:11:21.170 Flexible Data Placement Supported: Not Supported 00:11:21.170 00:11:21.170 Controller Memory Buffer Support 00:11:21.170 ================================ 00:11:21.170 Supported: No 00:11:21.170 00:11:21.170 Persistent Memory Region Support 00:11:21.170 ================================ 00:11:21.170 Supported: No 00:11:21.170 00:11:21.170 Admin Command Set Attributes 00:11:21.170 ============================ 00:11:21.170 Security Send/Receive: Not Supported 00:11:21.170 Format NVM: Supported 00:11:21.170 Firmware Activate/Download: Not Supported 00:11:21.170 Namespace Management: Supported 00:11:21.170 Device Self-Test: Not Supported 00:11:21.170 Directives: Supported 00:11:21.170 NVMe-MI: Not Supported 00:11:21.170 Virtualization Management: Not Supported 00:11:21.170 Doorbell Buffer Config: Supported 00:11:21.170 Get LBA Status Capability: Not Supported 00:11:21.170 Command & Feature Lockdown Capability: Not Supported 00:11:21.170 Abort Command Limit: 4 00:11:21.170 Async Event Request 
Limit: 4 00:11:21.170 Number of Firmware Slots: N/A 00:11:21.170 Firmware Slot 1 Read-Only: N/A 00:11:21.170 Firmware Activation Without Reset: N/A 00:11:21.170 Multiple Update Detection Support: N/A 00:11:21.170 Firmware Update Granularity: No Information Provided 00:11:21.170 Per-Namespace SMART Log: Yes 00:11:21.170 Asymmetric Namespace Access Log Page: Not Supported 00:11:21.170 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:21.170 Command Effects Log Page: Supported 00:11:21.170 Get Log Page Extended Data: Supported 00:11:21.170 Telemetry Log Pages: Not Supported 00:11:21.170 Persistent Event Log Pages: Not Supported 00:11:21.170 Supported Log Pages Log Page: May Support 00:11:21.170 Commands Supported & Effects Log Page: Not Supported 00:11:21.170 Feature Identifiers & Effects Log Page:May Support 00:11:21.170 NVMe-MI Commands & Effects Log Page: May Support 00:11:21.170 Data Area 4 for Telemetry Log: Not Supported 00:11:21.170 Error Log Page Entries Supported: 1 00:11:21.170 Keep Alive: Not Supported 00:11:21.170 00:11:21.170 NVM Command Set Attributes 00:11:21.170 ========================== 00:11:21.170 Submission Queue Entry Size 00:11:21.170 Max: 64 00:11:21.170 Min: 64 00:11:21.170 Completion Queue Entry Size 00:11:21.170 Max: 16 00:11:21.170 Min: 16 00:11:21.170 Number of Namespaces: 256 00:11:21.170 Compare Command: Supported 00:11:21.170 Write Uncorrectable Command: Not Supported 00:11:21.170 Dataset Management Command: Supported 00:11:21.170 Write Zeroes Command: Supported 00:11:21.170 Set Features Save Field: Supported 00:11:21.170 Reservations: Not Supported 00:11:21.170 Timestamp: Supported 00:11:21.170 Copy: Supported 00:11:21.170 Volatile Write Cache: Present 00:11:21.170 Atomic Write Unit (Normal): 1 00:11:21.170 Atomic Write Unit (PFail): 1 00:11:21.170 Atomic Compare & Write Unit: 1 00:11:21.170 Fused Compare & Write: Not Supported 00:11:21.170 Scatter-Gather List 00:11:21.170 SGL Command Set: Supported 00:11:21.170 SGL Keyed: Not Supported 00:11:21.170 SGL Bit Bucket Descriptor: Not Supported 00:11:21.170 SGL Metadata Pointer: Not Supported 00:11:21.170 Oversized SGL: Not Supported 00:11:21.170 SGL Metadata Address: Not Supported 00:11:21.170 SGL Offset: Not Supported 00:11:21.170 Transport SGL Data Block: Not Supported 00:11:21.170 Replay Protected Memory Block: Not Supported 00:11:21.170 00:11:21.170 Firmware Slot Information 00:11:21.170 ========================= 00:11:21.170 Active slot: 1 00:11:21.170 Slot 1 Firmware Revision: 1.0 00:11:21.170 00:11:21.170 00:11:21.170 Commands Supported and Effects 00:11:21.170 ============================== 00:11:21.170 Admin Commands 00:11:21.170 -------------- 00:11:21.170 Delete I/O Submission Queue (00h): Supported 00:11:21.170 Create I/O Submission Queue (01h): Supported 00:11:21.170 Get Log Page (02h): Supported 00:11:21.170 Delete I/O Completion Queue (04h): Supported 00:11:21.170 Create I/O Completion Queue (05h): Supported 00:11:21.170 Identify (06h): Supported 00:11:21.170 Abort (08h): Supported 00:11:21.170 Set Features (09h): Supported 00:11:21.170 Get Features (0Ah): Supported 00:11:21.170 Asynchronous Event Request (0Ch): Supported 00:11:21.170 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:21.170 Directive Send (19h): Supported 00:11:21.170 Directive Receive (1Ah): Supported 00:11:21.170 Virtualization Management (1Ch): Supported 00:11:21.170 Doorbell Buffer Config (7Ch): Supported 00:11:21.171 Format NVM (80h): Supported LBA-Change 00:11:21.171 I/O Commands 00:11:21.171 ------------ 
00:11:21.171 Flush (00h): Supported LBA-Change 00:11:21.171 Write (01h): Supported LBA-Change 00:11:21.171 Read (02h): Supported 00:11:21.171 Compare (05h): Supported 00:11:21.171 Write Zeroes (08h): Supported LBA-Change 00:11:21.171 Dataset Management (09h): Supported LBA-Change 00:11:21.171 Unknown (0Ch): Supported 00:11:21.171 Unknown (12h): Supported 00:11:21.171 Copy (19h): Supported LBA-Change 00:11:21.171 Unknown (1Dh): Supported LBA-Change 00:11:21.171 00:11:21.171 Error Log 00:11:21.171 ========= 00:11:21.171 00:11:21.171 Arbitration 00:11:21.171 =========== 00:11:21.171 Arbitration Burst: no limit 00:11:21.171 00:11:21.171 Power Management 00:11:21.171 ================ 00:11:21.171 Number of Power States: 1 00:11:21.171 Current Power State: Power State #0 00:11:21.171 Power State #0: 00:11:21.171 Max Power: 25.00 W 00:11:21.171 Non-Operational State: Operational 00:11:21.171 Entry Latency: 16 microseconds 00:11:21.171 Exit Latency: 4 microseconds 00:11:21.171 Relative Read Throughput: 0 00:11:21.171 Relative Read Latency: 0 00:11:21.171 Relative Write Throughput: 0 00:11:21.171 Relative Write Latency: 0 00:11:21.171 [2024-07-24 15:36:42.619185] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 64222 terminated unexpected 00:11:21.171 Idle Power: Not Reported 00:11:21.171 Active Power: Not Reported 00:11:21.171 Non-Operational Permissive Mode: Not Supported 00:11:21.171 00:11:21.171 Health Information 00:11:21.171 ================== 00:11:21.171 Critical Warnings: 00:11:21.171 Available Spare Space: OK 00:11:21.171 Temperature: OK 00:11:21.171 Device Reliability: OK 00:11:21.171 Read Only: No 00:11:21.171 Volatile Memory Backup: OK 00:11:21.171 Current Temperature: 323 Kelvin (50 Celsius) 00:11:21.171 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:21.171 Available Spare: 0% 00:11:21.171 Available Spare Threshold: 0% 00:11:21.171 Life Percentage Used: 0% 00:11:21.171 Data Units Read: 1723 00:11:21.171 Data Units Written: 797 00:11:21.171 Host Read Commands: 81088 00:11:21.171 Host Write Commands: 40276 00:11:21.171 Controller Busy Time: 0 minutes 00:11:21.171 Power Cycles: 0 00:11:21.171 Power On Hours: 0 hours 00:11:21.171 Unsafe Shutdowns: 0 00:11:21.171 Unrecoverable Media Errors: 0 00:11:21.171 Lifetime Error Log Entries: 0 00:11:21.171 Warning Temperature Time: 0 minutes 00:11:21.171 Critical Temperature Time: 0 minutes 00:11:21.171 00:11:21.171 Number of Queues 00:11:21.171 ================ 00:11:21.171 Number of I/O Submission Queues: 64 00:11:21.171 Number of I/O Completion Queues: 64 00:11:21.171 00:11:21.171 ZNS Specific Controller Data 00:11:21.171 ============================ 00:11:21.171 Zone Append Size Limit: 0 00:11:21.171 00:11:21.171 00:11:21.171 Active Namespaces 00:11:21.171 ================= 00:11:21.171 Namespace ID:1 00:11:21.171 Error Recovery Timeout: Unlimited 00:11:21.171 Command Set Identifier: NVM (00h) 00:11:21.171 Deallocate: Supported 00:11:21.171 Deallocated/Unwritten Error: Supported 00:11:21.171 Deallocated Read Value: All 0x00 00:11:21.171 Deallocate in Write Zeroes: Not Supported 00:11:21.171 Deallocated Guard Field: 0xFFFF 00:11:21.171 Flush: Supported 00:11:21.171 Reservation: Not Supported 00:11:21.171 Metadata Transferred as: Separate Metadata Buffer 00:11:21.171 Namespace Sharing Capabilities: Private 00:11:21.171 Size (in LBAs): 1548666 (5GiB) 00:11:21.171 Capacity (in LBAs): 1548666 (5GiB) 00:11:21.171 Utilization (in LBAs): 1548666 (5GiB) 00:11:21.171 Thin Provisioning: Not Supported 00:11:21.171
Per-NS Atomic Units: No 00:11:21.171 Maximum Single Source Range Length: 128 00:11:21.171 Maximum Copy Length: 128 00:11:21.171 Maximum Source Range Count: 128 00:11:21.171 NGUID/EUI64 Never Reused: No 00:11:21.171 Namespace Write Protected: No 00:11:21.171 Number of LBA Formats: 8 00:11:21.171 Current LBA Format: LBA Format #07 00:11:21.171 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:21.171 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:21.171 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:21.171 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:21.171 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:21.171 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:21.171 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:21.171 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:21.171 00:11:21.171 ===================================================== 00:11:21.171 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:21.171 ===================================================== 00:11:21.171 Controller Capabilities/Features 00:11:21.171 ================================ 00:11:21.171 Vendor ID: 1b36 00:11:21.171 Subsystem Vendor ID: 1af4 00:11:21.171 Serial Number: 12341 00:11:21.171 Model Number: QEMU NVMe Ctrl 00:11:21.171 Firmware Version: 8.0.0 00:11:21.171 Recommended Arb Burst: 6 00:11:21.171 IEEE OUI Identifier: 00 54 52 00:11:21.171 Multi-path I/O 00:11:21.171 May have multiple subsystem ports: No 00:11:21.171 May have multiple controllers: No 00:11:21.171 Associated with SR-IOV VF: No 00:11:21.171 Max Data Transfer Size: 524288 00:11:21.171 Max Number of Namespaces: 256 00:11:21.171 Max Number of I/O Queues: 64 00:11:21.171 NVMe Specification Version (VS): 1.4 00:11:21.171 NVMe Specification Version (Identify): 1.4 00:11:21.171 Maximum Queue Entries: 2048 00:11:21.171 Contiguous Queues Required: Yes 00:11:21.171 Arbitration Mechanisms Supported 00:11:21.171 Weighted Round Robin: Not Supported 00:11:21.171 Vendor Specific: Not Supported 00:11:21.171 Reset Timeout: 7500 ms 00:11:21.171 Doorbell Stride: 4 bytes 00:11:21.171 NVM Subsystem Reset: Not Supported 00:11:21.171 Command Sets Supported 00:11:21.171 NVM Command Set: Supported 00:11:21.171 Boot Partition: Not Supported 00:11:21.171 Memory Page Size Minimum: 4096 bytes 00:11:21.171 Memory Page Size Maximum: 65536 bytes 00:11:21.171 Persistent Memory Region: Not Supported 00:11:21.171 Optional Asynchronous Events Supported 00:11:21.171 Namespace Attribute Notices: Supported 00:11:21.171 Firmware Activation Notices: Not Supported 00:11:21.171 ANA Change Notices: Not Supported 00:11:21.171 PLE Aggregate Log Change Notices: Not Supported 00:11:21.171 LBA Status Info Alert Notices: Not Supported 00:11:21.171 EGE Aggregate Log Change Notices: Not Supported 00:11:21.171 Normal NVM Subsystem Shutdown event: Not Supported 00:11:21.171 Zone Descriptor Change Notices: Not Supported 00:11:21.171 Discovery Log Change Notices: Not Supported 00:11:21.171 Controller Attributes 00:11:21.171 128-bit Host Identifier: Not Supported 00:11:21.171 Non-Operational Permissive Mode: Not Supported 00:11:21.171 NVM Sets: Not Supported 00:11:21.171 Read Recovery Levels: Not Supported 00:11:21.171 Endurance Groups: Not Supported 00:11:21.171 Predictable Latency Mode: Not Supported 00:11:21.171 Traffic Based Keep ALive: Not Supported 00:11:21.171 Namespace Granularity: Not Supported 00:11:21.171 SQ Associations: Not Supported 00:11:21.171 UUID List: Not Supported 00:11:21.171 Multi-Domain Subsystem: Not Supported 
00:11:21.171 Fixed Capacity Management: Not Supported 00:11:21.171 Variable Capacity Management: Not Supported 00:11:21.171 Delete Endurance Group: Not Supported 00:11:21.171 Delete NVM Set: Not Supported 00:11:21.171 Extended LBA Formats Supported: Supported 00:11:21.171 Flexible Data Placement Supported: Not Supported 00:11:21.171 00:11:21.171 Controller Memory Buffer Support 00:11:21.171 ================================ 00:11:21.171 Supported: No 00:11:21.172 00:11:21.172 Persistent Memory Region Support 00:11:21.172 ================================ 00:11:21.172 Supported: No 00:11:21.172 00:11:21.172 Admin Command Set Attributes 00:11:21.172 ============================ 00:11:21.172 Security Send/Receive: Not Supported 00:11:21.172 Format NVM: Supported 00:11:21.172 Firmware Activate/Download: Not Supported 00:11:21.172 Namespace Management: Supported 00:11:21.172 Device Self-Test: Not Supported 00:11:21.172 Directives: Supported 00:11:21.172 NVMe-MI: Not Supported 00:11:21.172 Virtualization Management: Not Supported 00:11:21.172 Doorbell Buffer Config: Supported 00:11:21.172 Get LBA Status Capability: Not Supported 00:11:21.172 Command & Feature Lockdown Capability: Not Supported 00:11:21.172 Abort Command Limit: 4 00:11:21.172 Async Event Request Limit: 4 00:11:21.172 Number of Firmware Slots: N/A 00:11:21.172 Firmware Slot 1 Read-Only: N/A 00:11:21.172 Firmware Activation Without Reset: N/A 00:11:21.172 Multiple Update Detection Support: N/A 00:11:21.172 Firmware Update Granularity: No Information Provided 00:11:21.172 Per-Namespace SMART Log: Yes 00:11:21.172 Asymmetric Namespace Access Log Page: Not Supported 00:11:21.172 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:21.172 Command Effects Log Page: Supported 00:11:21.172 Get Log Page Extended Data: Supported 00:11:21.172 Telemetry Log Pages: Not Supported 00:11:21.172 Persistent Event Log Pages: Not Supported 00:11:21.172 Supported Log Pages Log Page: May Support 00:11:21.172 Commands Supported & Effects Log Page: Not Supported 00:11:21.172 Feature Identifiers & Effects Log Page:May Support 00:11:21.172 NVMe-MI Commands & Effects Log Page: May Support 00:11:21.172 Data Area 4 for Telemetry Log: Not Supported 00:11:21.172 Error Log Page Entries Supported: 1 00:11:21.172 Keep Alive: Not Supported 00:11:21.172 00:11:21.172 NVM Command Set Attributes 00:11:21.172 ========================== 00:11:21.172 Submission Queue Entry Size 00:11:21.172 Max: 64 00:11:21.172 Min: 64 00:11:21.172 Completion Queue Entry Size 00:11:21.172 Max: 16 00:11:21.172 Min: 16 00:11:21.172 Number of Namespaces: 256 00:11:21.172 Compare Command: Supported 00:11:21.172 Write Uncorrectable Command: Not Supported 00:11:21.172 Dataset Management Command: Supported 00:11:21.172 Write Zeroes Command: Supported 00:11:21.172 Set Features Save Field: Supported 00:11:21.172 Reservations: Not Supported 00:11:21.172 Timestamp: Supported 00:11:21.172 Copy: Supported 00:11:21.172 Volatile Write Cache: Present 00:11:21.172 Atomic Write Unit (Normal): 1 00:11:21.172 Atomic Write Unit (PFail): 1 00:11:21.172 Atomic Compare & Write Unit: 1 00:11:21.172 Fused Compare & Write: Not Supported 00:11:21.172 Scatter-Gather List 00:11:21.172 SGL Command Set: Supported 00:11:21.172 SGL Keyed: Not Supported 00:11:21.172 SGL Bit Bucket Descriptor: Not Supported 00:11:21.172 SGL Metadata Pointer: Not Supported 00:11:21.172 Oversized SGL: Not Supported 00:11:21.172 SGL Metadata Address: Not Supported 00:11:21.172 SGL Offset: Not Supported 00:11:21.172 Transport SGL Data Block: Not 
Supported 00:11:21.172 Replay Protected Memory Block: Not Supported 00:11:21.172 00:11:21.172 Firmware Slot Information 00:11:21.172 ========================= 00:11:21.172 Active slot: 1 00:11:21.172 Slot 1 Firmware Revision: 1.0 00:11:21.172 00:11:21.172 00:11:21.172 Commands Supported and Effects 00:11:21.172 ============================== 00:11:21.172 Admin Commands 00:11:21.172 -------------- 00:11:21.172 Delete I/O Submission Queue (00h): Supported 00:11:21.172 Create I/O Submission Queue (01h): Supported 00:11:21.172 Get Log Page (02h): Supported 00:11:21.172 Delete I/O Completion Queue (04h): Supported 00:11:21.172 Create I/O Completion Queue (05h): Supported 00:11:21.172 Identify (06h): Supported 00:11:21.172 Abort (08h): Supported 00:11:21.172 Set Features (09h): Supported 00:11:21.172 Get Features (0Ah): Supported 00:11:21.172 Asynchronous Event Request (0Ch): Supported 00:11:21.172 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:21.172 Directive Send (19h): Supported 00:11:21.172 Directive Receive (1Ah): Supported 00:11:21.172 Virtualization Management (1Ch): Supported 00:11:21.172 Doorbell Buffer Config (7Ch): Supported 00:11:21.172 Format NVM (80h): Supported LBA-Change 00:11:21.172 I/O Commands 00:11:21.172 ------------ 00:11:21.172 Flush (00h): Supported LBA-Change 00:11:21.172 Write (01h): Supported LBA-Change 00:11:21.172 Read (02h): Supported 00:11:21.172 Compare (05h): Supported 00:11:21.172 Write Zeroes (08h): Supported LBA-Change 00:11:21.172 Dataset Management (09h): Supported LBA-Change 00:11:21.172 Unknown (0Ch): Supported 00:11:21.172 Unknown (12h): Supported 00:11:21.172 Copy (19h): Supported LBA-Change 00:11:21.172 Unknown (1Dh): Supported LBA-Change 00:11:21.172 00:11:21.172 Error Log 00:11:21.172 ========= 00:11:21.172 00:11:21.172 Arbitration 00:11:21.172 =========== 00:11:21.172 Arbitration Burst: no limit 00:11:21.172 00:11:21.172 Power Management 00:11:21.172 ================ 00:11:21.172 Number of Power States: 1 00:11:21.172 Current Power State: Power State #0 00:11:21.172 Power State #0: 00:11:21.172 Max Power: 25.00 W 00:11:21.172 Non-Operational State: Operational 00:11:21.172 Entry Latency: 16 microseconds 00:11:21.172 Exit Latency: 4 microseconds 00:11:21.172 Relative Read Throughput: 0 00:11:21.172 Relative Read Latency: 0 00:11:21.172 Relative Write Throughput: 0 00:11:21.172 Relative Write Latency: 0 00:11:21.172 Idle Power: Not Reported 00:11:21.172 Active Power: Not Reported 00:11:21.172 Non-Operational Permissive Mode: Not Supported 00:11:21.172 00:11:21.172 Health Information 00:11:21.172 ================== 00:11:21.172 Critical Warnings: 00:11:21.172 Available Spare Space: OK 00:11:21.172 Temperature: OK 00:11:21.172 Device Reliability: OK 00:11:21.172 Read Only: No 00:11:21.172 Volatile Memory Backup: OK 00:11:21.172 Current Temperature: 323 Kelvin (50 Celsius) 00:11:21.172 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:21.172 Available Spare: 0% 00:11:21.172 Available Spare Threshold: 0% 00:11:21.172 Life Percentage Used: 0% 00:11:21.172 Data Units Read: 1182 00:11:21.172 Data Units Written: 548 00:11:21.172 Host Read Commands: 56511 00:11:21.172 Host Write Commands: 27780 00:11:21.172 Controller Busy Time: 0 minutes 00:11:21.172 Power Cycles: 0 00:11:21.172 Power On Hours: 0 hours 00:11:21.172 Unsafe Shutdowns: 0 00:11:21.172 Unrecoverable Media Errors: 0 00:11:21.172 Lifetime Error Log Entries: 0 00:11:21.172 Warning Temperature Time: 0 minutes 00:11:21.172 Critical Temperature Time: 0 minutes 00:11:21.172 
00:11:21.172 Number of Queues 00:11:21.172 ================ 00:11:21.172 Number of I/O Submission Queues: 64 00:11:21.172 Number of I/O Completion Queues: 64 00:11:21.172 00:11:21.172 ZNS Specific Controller Data 00:11:21.172 ============================ 00:11:21.172 Zone Append Size Limit: 0 00:11:21.172 00:11:21.172 00:11:21.172 Active Namespaces 00:11:21.172 ================= 00:11:21.172 Namespace ID:1 00:11:21.172 Error Recovery Timeout: Unlimited 00:11:21.172 [2024-07-24 15:36:42.620275] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 64222 terminated unexpected 00:11:21.172 Command Set Identifier: NVM (00h) 00:11:21.172 Deallocate: Supported 00:11:21.172 Deallocated/Unwritten Error: Supported 00:11:21.172 Deallocated Read Value: All 0x00 00:11:21.172 Deallocate in Write Zeroes: Not Supported 00:11:21.172 Deallocated Guard Field: 0xFFFF 00:11:21.172 Flush: Supported 00:11:21.172 Reservation: Not Supported 00:11:21.172 Namespace Sharing Capabilities: Private 00:11:21.172 Size (in LBAs): 1310720 (5GiB) 00:11:21.172 Capacity (in LBAs): 1310720 (5GiB) 00:11:21.172 Utilization (in LBAs): 1310720 (5GiB) 00:11:21.172 Thin Provisioning: Not Supported 00:11:21.172 Per-NS Atomic Units: No 00:11:21.172 Maximum Single Source Range Length: 128 00:11:21.172 Maximum Copy Length: 128 00:11:21.172 Maximum Source Range Count: 128 00:11:21.172 NGUID/EUI64 Never Reused: No 00:11:21.172 Namespace Write Protected: No 00:11:21.172 Number of LBA Formats: 8 00:11:21.172 Current LBA Format: LBA Format #04 00:11:21.172 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:21.172 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:21.172 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:21.172 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:21.172 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:21.173 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:21.173 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:21.173 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:21.173 00:11:21.173 ===================================================== 00:11:21.173 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:21.173 ===================================================== 00:11:21.173 Controller Capabilities/Features 00:11:21.173 ================================ 00:11:21.173 Vendor ID: 1b36 00:11:21.173 Subsystem Vendor ID: 1af4 00:11:21.173 Serial Number: 12343 00:11:21.173 Model Number: QEMU NVMe Ctrl 00:11:21.173 Firmware Version: 8.0.0 00:11:21.173 Recommended Arb Burst: 6 00:11:21.173 IEEE OUI Identifier: 00 54 52 00:11:21.173 Multi-path I/O 00:11:21.173 May have multiple subsystem ports: No 00:11:21.173 May have multiple controllers: Yes 00:11:21.173 Associated with SR-IOV VF: No 00:11:21.173 Max Data Transfer Size: 524288 00:11:21.173 Max Number of Namespaces: 256 00:11:21.173 Max Number of I/O Queues: 64 00:11:21.173 NVMe Specification Version (VS): 1.4 00:11:21.173 NVMe Specification Version (Identify): 1.4 00:11:21.173 Maximum Queue Entries: 2048 00:11:21.173 Contiguous Queues Required: Yes 00:11:21.173 Arbitration Mechanisms Supported 00:11:21.173 Weighted Round Robin: Not Supported 00:11:21.173 Vendor Specific: Not Supported 00:11:21.173 Reset Timeout: 7500 ms 00:11:21.173 Doorbell Stride: 4 bytes 00:11:21.173 NVM Subsystem Reset: Not Supported 00:11:21.173 Command Sets Supported 00:11:21.173 NVM Command Set: Supported 00:11:21.173 Boot Partition: Not Supported 00:11:21.173 Memory Page Size Minimum: 4096 bytes 00:11:21.173 Memory Page Size
Maximum: 65536 bytes 00:11:21.173 Persistent Memory Region: Not Supported 00:11:21.173 Optional Asynchronous Events Supported 00:11:21.173 Namespace Attribute Notices: Supported 00:11:21.173 Firmware Activation Notices: Not Supported 00:11:21.173 ANA Change Notices: Not Supported 00:11:21.173 PLE Aggregate Log Change Notices: Not Supported 00:11:21.173 LBA Status Info Alert Notices: Not Supported 00:11:21.173 EGE Aggregate Log Change Notices: Not Supported 00:11:21.173 Normal NVM Subsystem Shutdown event: Not Supported 00:11:21.173 Zone Descriptor Change Notices: Not Supported 00:11:21.173 Discovery Log Change Notices: Not Supported 00:11:21.173 Controller Attributes 00:11:21.173 128-bit Host Identifier: Not Supported 00:11:21.173 Non-Operational Permissive Mode: Not Supported 00:11:21.173 NVM Sets: Not Supported 00:11:21.173 Read Recovery Levels: Not Supported 00:11:21.173 Endurance Groups: Supported 00:11:21.173 Predictable Latency Mode: Not Supported 00:11:21.173 Traffic Based Keep ALive: Not Supported 00:11:21.173 Namespace Granularity: Not Supported 00:11:21.173 SQ Associations: Not Supported 00:11:21.173 UUID List: Not Supported 00:11:21.173 Multi-Domain Subsystem: Not Supported 00:11:21.173 Fixed Capacity Management: Not Supported 00:11:21.173 Variable Capacity Management: Not Supported 00:11:21.173 Delete Endurance Group: Not Supported 00:11:21.173 Delete NVM Set: Not Supported 00:11:21.173 Extended LBA Formats Supported: Supported 00:11:21.173 Flexible Data Placement Supported: Supported 00:11:21.173 00:11:21.173 Controller Memory Buffer Support 00:11:21.173 ================================ 00:11:21.173 Supported: No 00:11:21.173 00:11:21.173 Persistent Memory Region Support 00:11:21.173 ================================ 00:11:21.173 Supported: No 00:11:21.173 00:11:21.173 Admin Command Set Attributes 00:11:21.173 ============================ 00:11:21.173 Security Send/Receive: Not Supported 00:11:21.173 Format NVM: Supported 00:11:21.173 Firmware Activate/Download: Not Supported 00:11:21.173 Namespace Management: Supported 00:11:21.173 Device Self-Test: Not Supported 00:11:21.173 Directives: Supported 00:11:21.173 NVMe-MI: Not Supported 00:11:21.173 Virtualization Management: Not Supported 00:11:21.173 Doorbell Buffer Config: Supported 00:11:21.173 Get LBA Status Capability: Not Supported 00:11:21.173 Command & Feature Lockdown Capability: Not Supported 00:11:21.173 Abort Command Limit: 4 00:11:21.173 Async Event Request Limit: 4 00:11:21.173 Number of Firmware Slots: N/A 00:11:21.173 Firmware Slot 1 Read-Only: N/A 00:11:21.173 Firmware Activation Without Reset: N/A 00:11:21.173 Multiple Update Detection Support: N/A 00:11:21.173 Firmware Update Granularity: No Information Provided 00:11:21.173 Per-Namespace SMART Log: Yes 00:11:21.173 Asymmetric Namespace Access Log Page: Not Supported 00:11:21.173 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:21.173 Command Effects Log Page: Supported 00:11:21.173 Get Log Page Extended Data: Supported 00:11:21.173 Telemetry Log Pages: Not Supported 00:11:21.173 Persistent Event Log Pages: Not Supported 00:11:21.173 Supported Log Pages Log Page: May Support 00:11:21.173 Commands Supported & Effects Log Page: Not Supported 00:11:21.173 Feature Identifiers & Effects Log Page:May Support 00:11:21.173 NVMe-MI Commands & Effects Log Page: May Support 00:11:21.173 Data Area 4 for Telemetry Log: Not Supported 00:11:21.173 Error Log Page Entries Supported: 1 00:11:21.173 Keep Alive: Not Supported 00:11:21.173 00:11:21.173 NVM Command Set 
Attributes 00:11:21.173 ========================== 00:11:21.173 Submission Queue Entry Size 00:11:21.173 Max: 64 00:11:21.173 Min: 64 00:11:21.173 Completion Queue Entry Size 00:11:21.173 Max: 16 00:11:21.173 Min: 16 00:11:21.173 Number of Namespaces: 256 00:11:21.173 Compare Command: Supported 00:11:21.173 Write Uncorrectable Command: Not Supported 00:11:21.173 Dataset Management Command: Supported 00:11:21.173 Write Zeroes Command: Supported 00:11:21.173 Set Features Save Field: Supported 00:11:21.173 Reservations: Not Supported 00:11:21.173 Timestamp: Supported 00:11:21.173 Copy: Supported 00:11:21.173 Volatile Write Cache: Present 00:11:21.173 Atomic Write Unit (Normal): 1 00:11:21.173 Atomic Write Unit (PFail): 1 00:11:21.173 Atomic Compare & Write Unit: 1 00:11:21.173 Fused Compare & Write: Not Supported 00:11:21.173 Scatter-Gather List 00:11:21.173 SGL Command Set: Supported 00:11:21.173 SGL Keyed: Not Supported 00:11:21.173 SGL Bit Bucket Descriptor: Not Supported 00:11:21.173 SGL Metadata Pointer: Not Supported 00:11:21.173 Oversized SGL: Not Supported 00:11:21.173 SGL Metadata Address: Not Supported 00:11:21.173 SGL Offset: Not Supported 00:11:21.173 Transport SGL Data Block: Not Supported 00:11:21.173 Replay Protected Memory Block: Not Supported 00:11:21.173 00:11:21.173 Firmware Slot Information 00:11:21.173 ========================= 00:11:21.173 Active slot: 1 00:11:21.173 Slot 1 Firmware Revision: 1.0 00:11:21.173 00:11:21.173 00:11:21.173 Commands Supported and Effects 00:11:21.173 ============================== 00:11:21.173 Admin Commands 00:11:21.173 -------------- 00:11:21.173 Delete I/O Submission Queue (00h): Supported 00:11:21.173 Create I/O Submission Queue (01h): Supported 00:11:21.173 Get Log Page (02h): Supported 00:11:21.173 Delete I/O Completion Queue (04h): Supported 00:11:21.173 Create I/O Completion Queue (05h): Supported 00:11:21.173 Identify (06h): Supported 00:11:21.173 Abort (08h): Supported 00:11:21.173 Set Features (09h): Supported 00:11:21.173 Get Features (0Ah): Supported 00:11:21.173 Asynchronous Event Request (0Ch): Supported 00:11:21.173 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:21.173 Directive Send (19h): Supported 00:11:21.173 Directive Receive (1Ah): Supported 00:11:21.173 Virtualization Management (1Ch): Supported 00:11:21.173 Doorbell Buffer Config (7Ch): Supported 00:11:21.173 Format NVM (80h): Supported LBA-Change 00:11:21.173 I/O Commands 00:11:21.173 ------------ 00:11:21.173 Flush (00h): Supported LBA-Change 00:11:21.173 Write (01h): Supported LBA-Change 00:11:21.173 Read (02h): Supported 00:11:21.173 Compare (05h): Supported 00:11:21.173 Write Zeroes (08h): Supported LBA-Change 00:11:21.173 Dataset Management (09h): Supported LBA-Change 00:11:21.173 Unknown (0Ch): Supported 00:11:21.173 Unknown (12h): Supported 00:11:21.173 Copy (19h): Supported LBA-Change 00:11:21.173 Unknown (1Dh): Supported LBA-Change 00:11:21.173 00:11:21.173 Error Log 00:11:21.173 ========= 00:11:21.173 00:11:21.173 Arbitration 00:11:21.173 =========== 00:11:21.173 Arbitration Burst: no limit 00:11:21.173 00:11:21.173 Power Management 00:11:21.173 ================ 00:11:21.173 Number of Power States: 1 00:11:21.173 Current Power State: Power State #0 00:11:21.173 Power State #0: 00:11:21.174 Max Power: 25.00 W 00:11:21.174 Non-Operational State: Operational 00:11:21.174 Entry Latency: 16 microseconds 00:11:21.174 Exit Latency: 4 microseconds 00:11:21.174 Relative Read Throughput: 0 00:11:21.174 Relative Read Latency: 0 00:11:21.174 Relative 
Write Throughput: 0 00:11:21.174 Relative Write Latency: 0 00:11:21.174 Idle Power: Not Reported 00:11:21.174 Active Power: Not Reported 00:11:21.174 Non-Operational Permissive Mode: Not Supported 00:11:21.174 00:11:21.174 Health Information 00:11:21.174 ================== 00:11:21.174 Critical Warnings: 00:11:21.174 Available Spare Space: OK 00:11:21.174 Temperature: OK 00:11:21.174 Device Reliability: OK 00:11:21.174 Read Only: No 00:11:21.174 Volatile Memory Backup: OK 00:11:21.174 Current Temperature: 323 Kelvin (50 Celsius) 00:11:21.174 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:21.174 Available Spare: 0% 00:11:21.174 Available Spare Threshold: 0% 00:11:21.174 Life Percentage Used: 0% 00:11:21.174 Data Units Read: 1246 00:11:21.174 Data Units Written: 576 00:11:21.174 Host Read Commands: 57096 00:11:21.174 Host Write Commands: 28039 00:11:21.174 Controller Busy Time: 0 minutes 00:11:21.174 Power Cycles: 0 00:11:21.174 Power On Hours: 0 hours 00:11:21.174 Unsafe Shutdowns: 0 00:11:21.174 Unrecoverable Media Errors: 0 00:11:21.174 Lifetime Error Log Entries: 0 00:11:21.174 Warning Temperature Time: 0 minutes 00:11:21.174 Critical Temperature Time: 0 minutes 00:11:21.174 00:11:21.174 Number of Queues 00:11:21.174 ================ 00:11:21.174 Number of I/O Submission Queues: 64 00:11:21.174 Number of I/O Completion Queues: 64 00:11:21.174 00:11:21.174 ZNS Specific Controller Data 00:11:21.174 ============================ 00:11:21.174 Zone Append Size Limit: 0 00:11:21.174 00:11:21.174 00:11:21.174 Active Namespaces 00:11:21.174 ================= 00:11:21.174 Namespace ID:1 00:11:21.174 Error Recovery Timeout: Unlimited 00:11:21.174 Command Set Identifier: NVM (00h) 00:11:21.174 Deallocate: Supported 00:11:21.174 Deallocated/Unwritten Error: Supported 00:11:21.174 Deallocated Read Value: All 0x00 00:11:21.174 Deallocate in Write Zeroes: Not Supported 00:11:21.174 Deallocated Guard Field: 0xFFFF 00:11:21.174 Flush: Supported 00:11:21.174 Reservation: Not Supported 00:11:21.174 Namespace Sharing Capabilities: Multiple Controllers 00:11:21.174 Size (in LBAs): 262144 (1GiB) 00:11:21.174 Capacity (in LBAs): 262144 (1GiB) 00:11:21.174 Utilization (in LBAs): 262144 (1GiB) 00:11:21.174 Thin Provisioning: Not Supported 00:11:21.174 Per-NS Atomic Units: No 00:11:21.174 Maximum Single Source Range Length: 128 00:11:21.174 Maximum Copy Length: 128 00:11:21.174 Maximum Source Range Count: 128 00:11:21.174 NGUID/EUI64 Never Reused: No 00:11:21.174 Namespace Write Protected: No 00:11:21.174 Endurance group ID: 1 00:11:21.174 Number of LBA Formats: 8 00:11:21.174 Current LBA Format: LBA Format #04 00:11:21.174 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:21.174 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:21.174 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:21.174 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:21.174 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:21.174 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:21.174 [2024-07-24 15:36:42.622118] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 64222 terminated unexpected 00:11:21.174 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:21.174 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:21.174 00:11:21.174 Get Feature FDP: 00:11:21.174 ================ 00:11:21.174 Enabled: Yes 00:11:21.174 FDP configuration index: 0 00:11:21.174 00:11:21.174 FDP configurations log page 00:11:21.174 =========================== 00:11:21.174 Number of FDP
configurations: 1 00:11:21.174 Version: 0 00:11:21.174 Size: 112 00:11:21.174 FDP Configuration Descriptor: 0 00:11:21.174 Descriptor Size: 96 00:11:21.174 Reclaim Group Identifier format: 2 00:11:21.174 FDP Volatile Write Cache: Not Present 00:11:21.174 FDP Configuration: Valid 00:11:21.174 Vendor Specific Size: 0 00:11:21.174 Number of Reclaim Groups: 2 00:11:21.174 Number of Recalim Unit Handles: 8 00:11:21.174 Max Placement Identifiers: 128 00:11:21.174 Number of Namespaces Suppprted: 256 00:11:21.174 Reclaim unit Nominal Size: 6000000 bytes 00:11:21.174 Estimated Reclaim Unit Time Limit: Not Reported 00:11:21.174 RUH Desc #000: RUH Type: Initially Isolated 00:11:21.174 RUH Desc #001: RUH Type: Initially Isolated 00:11:21.174 RUH Desc #002: RUH Type: Initially Isolated 00:11:21.174 RUH Desc #003: RUH Type: Initially Isolated 00:11:21.174 RUH Desc #004: RUH Type: Initially Isolated 00:11:21.174 RUH Desc #005: RUH Type: Initially Isolated 00:11:21.174 RUH Desc #006: RUH Type: Initially Isolated 00:11:21.174 RUH Desc #007: RUH Type: Initially Isolated 00:11:21.174 00:11:21.174 FDP reclaim unit handle usage log page 00:11:21.174 ====================================== 00:11:21.174 Number of Reclaim Unit Handles: 8 00:11:21.174 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:21.174 RUH Usage Desc #001: RUH Attributes: Unused 00:11:21.174 RUH Usage Desc #002: RUH Attributes: Unused 00:11:21.174 RUH Usage Desc #003: RUH Attributes: Unused 00:11:21.174 RUH Usage Desc #004: RUH Attributes: Unused 00:11:21.174 RUH Usage Desc #005: RUH Attributes: Unused 00:11:21.174 RUH Usage Desc #006: RUH Attributes: Unused 00:11:21.174 RUH Usage Desc #007: RUH Attributes: Unused 00:11:21.174 00:11:21.174 FDP statistics log page 00:11:21.174 ======================= 00:11:21.174 Host bytes with metadata written: 374603776 00:11:21.174 Media bytes with metadata written: 374734848 00:11:21.174 Media bytes erased: 0 00:11:21.174 00:11:21.174 FDP events log page 00:11:21.174 =================== 00:11:21.174 Number of FDP events: 0 00:11:21.174 00:11:21.174 ===================================================== 00:11:21.174 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:21.174 ===================================================== 00:11:21.174 Controller Capabilities/Features 00:11:21.174 ================================ 00:11:21.174 Vendor ID: 1b36 00:11:21.174 Subsystem Vendor ID: 1af4 00:11:21.174 Serial Number: 12342 00:11:21.174 Model Number: QEMU NVMe Ctrl 00:11:21.174 Firmware Version: 8.0.0 00:11:21.174 Recommended Arb Burst: 6 00:11:21.174 IEEE OUI Identifier: 00 54 52 00:11:21.174 Multi-path I/O 00:11:21.174 May have multiple subsystem ports: No 00:11:21.174 May have multiple controllers: No 00:11:21.174 Associated with SR-IOV VF: No 00:11:21.174 Max Data Transfer Size: 524288 00:11:21.174 Max Number of Namespaces: 256 00:11:21.174 Max Number of I/O Queues: 64 00:11:21.174 NVMe Specification Version (VS): 1.4 00:11:21.174 NVMe Specification Version (Identify): 1.4 00:11:21.174 Maximum Queue Entries: 2048 00:11:21.174 Contiguous Queues Required: Yes 00:11:21.174 Arbitration Mechanisms Supported 00:11:21.174 Weighted Round Robin: Not Supported 00:11:21.174 Vendor Specific: Not Supported 00:11:21.174 Reset Timeout: 7500 ms 00:11:21.174 Doorbell Stride: 4 bytes 00:11:21.174 NVM Subsystem Reset: Not Supported 00:11:21.174 Command Sets Supported 00:11:21.174 NVM Command Set: Supported 00:11:21.174 Boot Partition: Not Supported 00:11:21.174 Memory Page Size Minimum: 4096 bytes 00:11:21.174 Memory 
Page Size Maximum: 65536 bytes 00:11:21.174 Persistent Memory Region: Not Supported 00:11:21.174 Optional Asynchronous Events Supported 00:11:21.174 Namespace Attribute Notices: Supported 00:11:21.174 Firmware Activation Notices: Not Supported 00:11:21.174 ANA Change Notices: Not Supported 00:11:21.174 PLE Aggregate Log Change Notices: Not Supported 00:11:21.174 LBA Status Info Alert Notices: Not Supported 00:11:21.174 EGE Aggregate Log Change Notices: Not Supported 00:11:21.174 Normal NVM Subsystem Shutdown event: Not Supported 00:11:21.174 Zone Descriptor Change Notices: Not Supported 00:11:21.174 Discovery Log Change Notices: Not Supported 00:11:21.174 Controller Attributes 00:11:21.174 128-bit Host Identifier: Not Supported 00:11:21.174 Non-Operational Permissive Mode: Not Supported 00:11:21.174 NVM Sets: Not Supported 00:11:21.174 Read Recovery Levels: Not Supported 00:11:21.174 Endurance Groups: Not Supported 00:11:21.174 Predictable Latency Mode: Not Supported 00:11:21.174 Traffic Based Keep ALive: Not Supported 00:11:21.174 Namespace Granularity: Not Supported 00:11:21.174 SQ Associations: Not Supported 00:11:21.174 UUID List: Not Supported 00:11:21.174 Multi-Domain Subsystem: Not Supported 00:11:21.174 Fixed Capacity Management: Not Supported 00:11:21.174 Variable Capacity Management: Not Supported 00:11:21.175 Delete Endurance Group: Not Supported 00:11:21.175 Delete NVM Set: Not Supported 00:11:21.175 Extended LBA Formats Supported: Supported 00:11:21.175 Flexible Data Placement Supported: Not Supported 00:11:21.175 00:11:21.175 Controller Memory Buffer Support 00:11:21.175 ================================ 00:11:21.175 Supported: No 00:11:21.175 00:11:21.175 Persistent Memory Region Support 00:11:21.175 ================================ 00:11:21.175 Supported: No 00:11:21.175 00:11:21.175 Admin Command Set Attributes 00:11:21.175 ============================ 00:11:21.175 Security Send/Receive: Not Supported 00:11:21.175 Format NVM: Supported 00:11:21.175 Firmware Activate/Download: Not Supported 00:11:21.175 Namespace Management: Supported 00:11:21.175 Device Self-Test: Not Supported 00:11:21.175 Directives: Supported 00:11:21.175 NVMe-MI: Not Supported 00:11:21.175 Virtualization Management: Not Supported 00:11:21.175 Doorbell Buffer Config: Supported 00:11:21.175 Get LBA Status Capability: Not Supported 00:11:21.175 Command & Feature Lockdown Capability: Not Supported 00:11:21.175 Abort Command Limit: 4 00:11:21.175 Async Event Request Limit: 4 00:11:21.175 Number of Firmware Slots: N/A 00:11:21.175 Firmware Slot 1 Read-Only: N/A 00:11:21.175 Firmware Activation Without Reset: N/A 00:11:21.175 Multiple Update Detection Support: N/A 00:11:21.175 Firmware Update Granularity: No Information Provided 00:11:21.175 Per-Namespace SMART Log: Yes 00:11:21.175 Asymmetric Namespace Access Log Page: Not Supported 00:11:21.175 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:21.175 Command Effects Log Page: Supported 00:11:21.175 Get Log Page Extended Data: Supported 00:11:21.175 Telemetry Log Pages: Not Supported 00:11:21.175 Persistent Event Log Pages: Not Supported 00:11:21.175 Supported Log Pages Log Page: May Support 00:11:21.175 Commands Supported & Effects Log Page: Not Supported 00:11:21.175 Feature Identifiers & Effects Log Page:May Support 00:11:21.175 NVMe-MI Commands & Effects Log Page: May Support 00:11:21.175 Data Area 4 for Telemetry Log: Not Supported 00:11:21.175 Error Log Page Entries Supported: 1 00:11:21.175 Keep Alive: Not Supported 00:11:21.175 00:11:21.175 NVM Command 
Set Attributes 00:11:21.175 ========================== 00:11:21.175 Submission Queue Entry Size 00:11:21.175 Max: 64 00:11:21.175 Min: 64 00:11:21.175 Completion Queue Entry Size 00:11:21.175 Max: 16 00:11:21.175 Min: 16 00:11:21.175 Number of Namespaces: 256 00:11:21.175 Compare Command: Supported 00:11:21.175 Write Uncorrectable Command: Not Supported 00:11:21.175 Dataset Management Command: Supported 00:11:21.175 Write Zeroes Command: Supported 00:11:21.175 Set Features Save Field: Supported 00:11:21.175 Reservations: Not Supported 00:11:21.175 Timestamp: Supported 00:11:21.175 Copy: Supported 00:11:21.175 Volatile Write Cache: Present 00:11:21.175 Atomic Write Unit (Normal): 1 00:11:21.175 Atomic Write Unit (PFail): 1 00:11:21.175 Atomic Compare & Write Unit: 1 00:11:21.175 Fused Compare & Write: Not Supported 00:11:21.175 Scatter-Gather List 00:11:21.175 SGL Command Set: Supported 00:11:21.175 SGL Keyed: Not Supported 00:11:21.175 SGL Bit Bucket Descriptor: Not Supported 00:11:21.175 SGL Metadata Pointer: Not Supported 00:11:21.175 Oversized SGL: Not Supported 00:11:21.175 SGL Metadata Address: Not Supported 00:11:21.175 SGL Offset: Not Supported 00:11:21.175 Transport SGL Data Block: Not Supported 00:11:21.175 Replay Protected Memory Block: Not Supported 00:11:21.175 00:11:21.175 Firmware Slot Information 00:11:21.175 ========================= 00:11:21.175 Active slot: 1 00:11:21.175 Slot 1 Firmware Revision: 1.0 00:11:21.175 00:11:21.175 00:11:21.175 Commands Supported and Effects 00:11:21.175 ============================== 00:11:21.175 Admin Commands 00:11:21.175 -------------- 00:11:21.175 Delete I/O Submission Queue (00h): Supported 00:11:21.175 Create I/O Submission Queue (01h): Supported 00:11:21.175 Get Log Page (02h): Supported 00:11:21.175 Delete I/O Completion Queue (04h): Supported 00:11:21.175 Create I/O Completion Queue (05h): Supported 00:11:21.175 Identify (06h): Supported 00:11:21.175 Abort (08h): Supported 00:11:21.175 Set Features (09h): Supported 00:11:21.175 Get Features (0Ah): Supported 00:11:21.175 Asynchronous Event Request (0Ch): Supported 00:11:21.175 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:21.175 Directive Send (19h): Supported 00:11:21.175 Directive Receive (1Ah): Supported 00:11:21.175 Virtualization Management (1Ch): Supported 00:11:21.175 Doorbell Buffer Config (7Ch): Supported 00:11:21.175 Format NVM (80h): Supported LBA-Change 00:11:21.175 I/O Commands 00:11:21.175 ------------ 00:11:21.175 Flush (00h): Supported LBA-Change 00:11:21.175 Write (01h): Supported LBA-Change 00:11:21.175 Read (02h): Supported 00:11:21.175 Compare (05h): Supported 00:11:21.175 Write Zeroes (08h): Supported LBA-Change 00:11:21.175 Dataset Management (09h): Supported LBA-Change 00:11:21.175 Unknown (0Ch): Supported 00:11:21.175 Unknown (12h): Supported 00:11:21.175 Copy (19h): Supported LBA-Change 00:11:21.175 Unknown (1Dh): Supported LBA-Change 00:11:21.175 00:11:21.175 Error Log 00:11:21.175 ========= 00:11:21.175 00:11:21.175 Arbitration 00:11:21.175 =========== 00:11:21.175 Arbitration Burst: no limit 00:11:21.175 00:11:21.175 Power Management 00:11:21.175 ================ 00:11:21.175 Number of Power States: 1 00:11:21.175 Current Power State: Power State #0 00:11:21.175 Power State #0: 00:11:21.175 Max Power: 25.00 W 00:11:21.175 Non-Operational State: Operational 00:11:21.175 Entry Latency: 16 microseconds 00:11:21.175 Exit Latency: 4 microseconds 00:11:21.175 Relative Read Throughput: 0 00:11:21.175 Relative Read Latency: 0 00:11:21.175 Relative 
Write Throughput: 0 00:11:21.175 Relative Write Latency: 0 00:11:21.175 Idle Power: Not Reported 00:11:21.175 Active Power: Not Reported 00:11:21.175 Non-Operational Permissive Mode: Not Supported 00:11:21.175 00:11:21.175 Health Information 00:11:21.175 ================== 00:11:21.175 Critical Warnings: 00:11:21.175 Available Spare Space: OK 00:11:21.175 Temperature: OK 00:11:21.175 Device Reliability: OK 00:11:21.175 Read Only: No 00:11:21.175 Volatile Memory Backup: OK 00:11:21.175 Current Temperature: 323 Kelvin (50 Celsius) 00:11:21.175 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:21.175 Available Spare: 0% 00:11:21.176 Available Spare Threshold: 0% 00:11:21.176 Life Percentage Used: 0% 00:11:21.176 Data Units Read: 3652 00:11:21.176 Data Units Written: 1682 00:11:21.176 Host Read Commands: 170820 00:11:21.176 Host Write Commands: 83790 00:11:21.176 Controller Busy Time: 0 minutes 00:11:21.176 Power Cycles: 0 00:11:21.176 Power On Hours: 0 hours 00:11:21.176 Unsafe Shutdowns: 0 00:11:21.176 Unrecoverable Media Errors: 0 00:11:21.176 Lifetime Error Log Entries: 0 00:11:21.176 Warning Temperature Time: 0 minutes 00:11:21.176 Critical Temperature Time: 0 minutes 00:11:21.176 00:11:21.176 Number of Queues 00:11:21.176 ================ 00:11:21.176 Number of I/O Submission Queues: 64 00:11:21.176 Number of I/O Completion Queues: 64 00:11:21.176 00:11:21.176 ZNS Specific Controller Data 00:11:21.176 ============================ 00:11:21.176 Zone Append Size Limit: 0 00:11:21.176 00:11:21.176 00:11:21.176 Active Namespaces 00:11:21.176 ================= 00:11:21.176 Namespace ID:1 00:11:21.176 Error Recovery Timeout: Unlimited 00:11:21.176 Command Set Identifier: NVM (00h) 00:11:21.176 Deallocate: Supported 00:11:21.176 Deallocated/Unwritten Error: Supported 00:11:21.176 Deallocated Read Value: All 0x00 00:11:21.176 Deallocate in Write Zeroes: Not Supported 00:11:21.176 Deallocated Guard Field: 0xFFFF 00:11:21.176 Flush: Supported 00:11:21.176 Reservation: Not Supported 00:11:21.176 Namespace Sharing Capabilities: Private 00:11:21.176 Size (in LBAs): 1048576 (4GiB) 00:11:21.176 Capacity (in LBAs): 1048576 (4GiB) 00:11:21.176 Utilization (in LBAs): 1048576 (4GiB) 00:11:21.176 Thin Provisioning: Not Supported 00:11:21.176 Per-NS Atomic Units: No 00:11:21.176 Maximum Single Source Range Length: 128 00:11:21.176 Maximum Copy Length: 128 00:11:21.176 Maximum Source Range Count: 128 00:11:21.176 NGUID/EUI64 Never Reused: No 00:11:21.176 Namespace Write Protected: No 00:11:21.176 Number of LBA Formats: 8 00:11:21.176 Current LBA Format: LBA Format #04 00:11:21.176 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:21.176 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:21.176 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:21.176 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:21.176 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:21.176 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:21.176 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:21.176 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:21.176 00:11:21.176 Namespace ID:2 00:11:21.176 Error Recovery Timeout: Unlimited 00:11:21.176 Command Set Identifier: NVM (00h) 00:11:21.176 Deallocate: Supported 00:11:21.176 Deallocated/Unwritten Error: Supported 00:11:21.176 Deallocated Read Value: All 0x00 00:11:21.176 Deallocate in Write Zeroes: Not Supported 00:11:21.176 Deallocated Guard Field: 0xFFFF 00:11:21.176 Flush: Supported 00:11:21.176 Reservation: Not Supported 00:11:21.176 Namespace 
Sharing Capabilities: Private 00:11:21.176 Size (in LBAs): 1048576 (4GiB) 00:11:21.176 Capacity (in LBAs): 1048576 (4GiB) 00:11:21.176 Utilization (in LBAs): 1048576 (4GiB) 00:11:21.176 Thin Provisioning: Not Supported 00:11:21.176 Per-NS Atomic Units: No 00:11:21.176 Maximum Single Source Range Length: 128 00:11:21.176 Maximum Copy Length: 128 00:11:21.176 Maximum Source Range Count: 128 00:11:21.176 NGUID/EUI64 Never Reused: No 00:11:21.176 Namespace Write Protected: No 00:11:21.176 Number of LBA Formats: 8 00:11:21.176 Current LBA Format: LBA Format #04 00:11:21.176 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:21.176 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:21.176 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:21.176 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:21.176 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:21.176 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:21.176 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:21.176 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:21.176 00:11:21.176 Namespace ID:3 00:11:21.176 Error Recovery Timeout: Unlimited 00:11:21.176 Command Set Identifier: NVM (00h) 00:11:21.176 Deallocate: Supported 00:11:21.176 Deallocated/Unwritten Error: Supported 00:11:21.176 Deallocated Read Value: All 0x00 00:11:21.176 Deallocate in Write Zeroes: Not Supported 00:11:21.176 Deallocated Guard Field: 0xFFFF 00:11:21.176 Flush: Supported 00:11:21.176 Reservation: Not Supported 00:11:21.176 Namespace Sharing Capabilities: Private 00:11:21.176 Size (in LBAs): 1048576 (4GiB) 00:11:21.176 Capacity (in LBAs): 1048576 (4GiB) 00:11:21.176 Utilization (in LBAs): 1048576 (4GiB) 00:11:21.176 Thin Provisioning: Not Supported 00:11:21.176 Per-NS Atomic Units: No 00:11:21.176 Maximum Single Source Range Length: 128 00:11:21.176 Maximum Copy Length: 128 00:11:21.176 Maximum Source Range Count: 128 00:11:21.176 NGUID/EUI64 Never Reused: No 00:11:21.176 Namespace Write Protected: No 00:11:21.176 Number of LBA Formats: 8 00:11:21.176 Current LBA Format: LBA Format #04 00:11:21.176 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:21.176 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:21.176 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:21.176 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:21.176 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:21.176 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:21.176 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:21.176 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:21.176 00:11:21.176 15:36:42 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:21.176 15:36:42 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:11:21.434 ===================================================== 00:11:21.434 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:21.434 ===================================================== 00:11:21.434 Controller Capabilities/Features 00:11:21.434 ================================ 00:11:21.434 Vendor ID: 1b36 00:11:21.434 Subsystem Vendor ID: 1af4 00:11:21.434 Serial Number: 12340 00:11:21.434 Model Number: QEMU NVMe Ctrl 00:11:21.434 Firmware Version: 8.0.0 00:11:21.434 Recommended Arb Burst: 6 00:11:21.434 IEEE OUI Identifier: 00 54 52 00:11:21.434 Multi-path I/O 00:11:21.434 May have multiple subsystem ports: No 00:11:21.434 May have multiple controllers: No 00:11:21.434 Associated with SR-IOV VF: No 00:11:21.434 Max Data Transfer 
Size: 524288 00:11:21.434 Max Number of Namespaces: 256 00:11:21.434 Max Number of I/O Queues: 64 00:11:21.434 NVMe Specification Version (VS): 1.4 00:11:21.434 NVMe Specification Version (Identify): 1.4 00:11:21.434 Maximum Queue Entries: 2048 00:11:21.434 Contiguous Queues Required: Yes 00:11:21.434 Arbitration Mechanisms Supported 00:11:21.435 Weighted Round Robin: Not Supported 00:11:21.435 Vendor Specific: Not Supported 00:11:21.435 Reset Timeout: 7500 ms 00:11:21.435 Doorbell Stride: 4 bytes 00:11:21.435 NVM Subsystem Reset: Not Supported 00:11:21.435 Command Sets Supported 00:11:21.435 NVM Command Set: Supported 00:11:21.435 Boot Partition: Not Supported 00:11:21.435 Memory Page Size Minimum: 4096 bytes 00:11:21.435 Memory Page Size Maximum: 65536 bytes 00:11:21.435 Persistent Memory Region: Not Supported 00:11:21.435 Optional Asynchronous Events Supported 00:11:21.435 Namespace Attribute Notices: Supported 00:11:21.435 Firmware Activation Notices: Not Supported 00:11:21.435 ANA Change Notices: Not Supported 00:11:21.435 PLE Aggregate Log Change Notices: Not Supported 00:11:21.435 LBA Status Info Alert Notices: Not Supported 00:11:21.435 EGE Aggregate Log Change Notices: Not Supported 00:11:21.435 Normal NVM Subsystem Shutdown event: Not Supported 00:11:21.435 Zone Descriptor Change Notices: Not Supported 00:11:21.435 Discovery Log Change Notices: Not Supported 00:11:21.435 Controller Attributes 00:11:21.435 128-bit Host Identifier: Not Supported 00:11:21.435 Non-Operational Permissive Mode: Not Supported 00:11:21.435 NVM Sets: Not Supported 00:11:21.435 Read Recovery Levels: Not Supported 00:11:21.435 Endurance Groups: Not Supported 00:11:21.435 Predictable Latency Mode: Not Supported 00:11:21.435 Traffic Based Keep Alive: Not Supported 00:11:21.435 Namespace Granularity: Not Supported 00:11:21.435 SQ Associations: Not Supported 00:11:21.435 UUID List: Not Supported 00:11:21.435 Multi-Domain Subsystem: Not Supported 00:11:21.435 Fixed Capacity Management: Not Supported 00:11:21.435 Variable Capacity Management: Not Supported 00:11:21.435 Delete Endurance Group: Not Supported 00:11:21.435 Delete NVM Set: Not Supported 00:11:21.435 Extended LBA Formats Supported: Supported 00:11:21.435 Flexible Data Placement Supported: Not Supported 00:11:21.435 00:11:21.435 Controller Memory Buffer Support 00:11:21.435 ================================ 00:11:21.435 Supported: No 00:11:21.435 00:11:21.435 Persistent Memory Region Support 00:11:21.435 ================================ 00:11:21.435 Supported: No 00:11:21.435 00:11:21.435 Admin Command Set Attributes 00:11:21.435 ============================ 00:11:21.435 Security Send/Receive: Not Supported 00:11:21.435 Format NVM: Supported 00:11:21.435 Firmware Activate/Download: Not Supported 00:11:21.435 Namespace Management: Supported 00:11:21.435 Device Self-Test: Not Supported 00:11:21.435 Directives: Supported 00:11:21.435 NVMe-MI: Not Supported 00:11:21.435 Virtualization Management: Not Supported 00:11:21.435 Doorbell Buffer Config: Supported 00:11:21.435 Get LBA Status Capability: Not Supported 00:11:21.435 Command & Feature Lockdown Capability: Not Supported 00:11:21.435 Abort Command Limit: 4 00:11:21.435 Async Event Request Limit: 4 00:11:21.435 Number of Firmware Slots: N/A 00:11:21.435 Firmware Slot 1 Read-Only: N/A 00:11:21.435 Firmware Activation Without Reset: N/A 00:11:21.435 Multiple Update Detection Support: N/A 00:11:21.435 Firmware Update Granularity: No Information Provided 00:11:21.435 Per-Namespace SMART Log: Yes 00:11:21.435 
Asymmetric Namespace Access Log Page: Not Supported 00:11:21.435 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:21.435 Command Effects Log Page: Supported 00:11:21.435 Get Log Page Extended Data: Supported 00:11:21.435 Telemetry Log Pages: Not Supported 00:11:21.435 Persistent Event Log Pages: Not Supported 00:11:21.435 Supported Log Pages Log Page: May Support 00:11:21.435 Commands Supported & Effects Log Page: Not Supported 00:11:21.435 Feature Identifiers & Effects Log Page:May Support 00:11:21.435 NVMe-MI Commands & Effects Log Page: May Support 00:11:21.435 Data Area 4 for Telemetry Log: Not Supported 00:11:21.435 Error Log Page Entries Supported: 1 00:11:21.435 Keep Alive: Not Supported 00:11:21.435 00:11:21.435 NVM Command Set Attributes 00:11:21.435 ========================== 00:11:21.435 Submission Queue Entry Size 00:11:21.435 Max: 64 00:11:21.435 Min: 64 00:11:21.435 Completion Queue Entry Size 00:11:21.435 Max: 16 00:11:21.435 Min: 16 00:11:21.435 Number of Namespaces: 256 00:11:21.435 Compare Command: Supported 00:11:21.435 Write Uncorrectable Command: Not Supported 00:11:21.435 Dataset Management Command: Supported 00:11:21.435 Write Zeroes Command: Supported 00:11:21.435 Set Features Save Field: Supported 00:11:21.435 Reservations: Not Supported 00:11:21.435 Timestamp: Supported 00:11:21.435 Copy: Supported 00:11:21.435 Volatile Write Cache: Present 00:11:21.435 Atomic Write Unit (Normal): 1 00:11:21.435 Atomic Write Unit (PFail): 1 00:11:21.435 Atomic Compare & Write Unit: 1 00:11:21.435 Fused Compare & Write: Not Supported 00:11:21.435 Scatter-Gather List 00:11:21.435 SGL Command Set: Supported 00:11:21.435 SGL Keyed: Not Supported 00:11:21.435 SGL Bit Bucket Descriptor: Not Supported 00:11:21.435 SGL Metadata Pointer: Not Supported 00:11:21.435 Oversized SGL: Not Supported 00:11:21.435 SGL Metadata Address: Not Supported 00:11:21.435 SGL Offset: Not Supported 00:11:21.435 Transport SGL Data Block: Not Supported 00:11:21.435 Replay Protected Memory Block: Not Supported 00:11:21.435 00:11:21.435 Firmware Slot Information 00:11:21.435 ========================= 00:11:21.435 Active slot: 1 00:11:21.435 Slot 1 Firmware Revision: 1.0 00:11:21.435 00:11:21.435 00:11:21.435 Commands Supported and Effects 00:11:21.435 ============================== 00:11:21.435 Admin Commands 00:11:21.435 -------------- 00:11:21.435 Delete I/O Submission Queue (00h): Supported 00:11:21.435 Create I/O Submission Queue (01h): Supported 00:11:21.435 Get Log Page (02h): Supported 00:11:21.435 Delete I/O Completion Queue (04h): Supported 00:11:21.435 Create I/O Completion Queue (05h): Supported 00:11:21.435 Identify (06h): Supported 00:11:21.435 Abort (08h): Supported 00:11:21.435 Set Features (09h): Supported 00:11:21.435 Get Features (0Ah): Supported 00:11:21.435 Asynchronous Event Request (0Ch): Supported 00:11:21.435 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:21.435 Directive Send (19h): Supported 00:11:21.435 Directive Receive (1Ah): Supported 00:11:21.435 Virtualization Management (1Ch): Supported 00:11:21.435 Doorbell Buffer Config (7Ch): Supported 00:11:21.435 Format NVM (80h): Supported LBA-Change 00:11:21.435 I/O Commands 00:11:21.435 ------------ 00:11:21.435 Flush (00h): Supported LBA-Change 00:11:21.435 Write (01h): Supported LBA-Change 00:11:21.435 Read (02h): Supported 00:11:21.435 Compare (05h): Supported 00:11:21.435 Write Zeroes (08h): Supported LBA-Change 00:11:21.435 Dataset Management (09h): Supported LBA-Change 00:11:21.435 Unknown (0Ch): Supported 
00:11:21.435 Unknown (12h): Supported 00:11:21.435 Copy (19h): Supported LBA-Change 00:11:21.435 Unknown (1Dh): Supported LBA-Change 00:11:21.435 00:11:21.435 Error Log 00:11:21.435 ========= 00:11:21.435 00:11:21.435 Arbitration 00:11:21.435 =========== 00:11:21.435 Arbitration Burst: no limit 00:11:21.435 00:11:21.435 Power Management 00:11:21.435 ================ 00:11:21.435 Number of Power States: 1 00:11:21.435 Current Power State: Power State #0 00:11:21.435 Power State #0: 00:11:21.435 Max Power: 25.00 W 00:11:21.435 Non-Operational State: Operational 00:11:21.435 Entry Latency: 16 microseconds 00:11:21.435 Exit Latency: 4 microseconds 00:11:21.435 Relative Read Throughput: 0 00:11:21.435 Relative Read Latency: 0 00:11:21.435 Relative Write Throughput: 0 00:11:21.435 Relative Write Latency: 0 00:11:21.435 Idle Power: Not Reported 00:11:21.435 Active Power: Not Reported 00:11:21.436 Non-Operational Permissive Mode: Not Supported 00:11:21.436 00:11:21.436 Health Information 00:11:21.436 ================== 00:11:21.436 Critical Warnings: 00:11:21.436 Available Spare Space: OK 00:11:21.436 Temperature: OK 00:11:21.436 Device Reliability: OK 00:11:21.436 Read Only: No 00:11:21.436 Volatile Memory Backup: OK 00:11:21.436 Current Temperature: 323 Kelvin (50 Celsius) 00:11:21.436 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:21.436 Available Spare: 0% 00:11:21.436 Available Spare Threshold: 0% 00:11:21.436 Life Percentage Used: 0% 00:11:21.436 Data Units Read: 1723 00:11:21.436 Data Units Written: 797 00:11:21.436 Host Read Commands: 81088 00:11:21.436 Host Write Commands: 40276 00:11:21.436 Controller Busy Time: 0 minutes 00:11:21.436 Power Cycles: 0 00:11:21.436 Power On Hours: 0 hours 00:11:21.436 Unsafe Shutdowns: 0 00:11:21.436 Unrecoverable Media Errors: 0 00:11:21.436 Lifetime Error Log Entries: 0 00:11:21.436 Warning Temperature Time: 0 minutes 00:11:21.436 Critical Temperature Time: 0 minutes 00:11:21.436 00:11:21.436 Number of Queues 00:11:21.436 ================ 00:11:21.436 Number of I/O Submission Queues: 64 00:11:21.436 Number of I/O Completion Queues: 64 00:11:21.436 00:11:21.436 ZNS Specific Controller Data 00:11:21.436 ============================ 00:11:21.436 Zone Append Size Limit: 0 00:11:21.436 00:11:21.436 00:11:21.436 Active Namespaces 00:11:21.436 ================= 00:11:21.436 Namespace ID:1 00:11:21.436 Error Recovery Timeout: Unlimited 00:11:21.436 Command Set Identifier: NVM (00h) 00:11:21.436 Deallocate: Supported 00:11:21.436 Deallocated/Unwritten Error: Supported 00:11:21.436 Deallocated Read Value: All 0x00 00:11:21.436 Deallocate in Write Zeroes: Not Supported 00:11:21.436 Deallocated Guard Field: 0xFFFF 00:11:21.436 Flush: Supported 00:11:21.436 Reservation: Not Supported 00:11:21.436 Metadata Transferred as: Separate Metadata Buffer 00:11:21.436 Namespace Sharing Capabilities: Private 00:11:21.436 Size (in LBAs): 1548666 (5GiB) 00:11:21.436 Capacity (in LBAs): 1548666 (5GiB) 00:11:21.436 Utilization (in LBAs): 1548666 (5GiB) 00:11:21.436 Thin Provisioning: Not Supported 00:11:21.436 Per-NS Atomic Units: No 00:11:21.436 Maximum Single Source Range Length: 128 00:11:21.436 Maximum Copy Length: 128 00:11:21.436 Maximum Source Range Count: 128 00:11:21.436 NGUID/EUI64 Never Reused: No 00:11:21.436 Namespace Write Protected: No 00:11:21.436 Number of LBA Formats: 8 00:11:21.436 Current LBA Format: LBA Format #07 00:11:21.436 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:21.436 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:21.436 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:11:21.436 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:21.436 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:21.436 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:21.436 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:21.436 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:21.436 00:11:21.436 15:36:42 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:21.436 15:36:42 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:11:22.002 ===================================================== 00:11:22.002 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:22.002 ===================================================== 00:11:22.002 Controller Capabilities/Features 00:11:22.002 ================================ 00:11:22.002 Vendor ID: 1b36 00:11:22.002 Subsystem Vendor ID: 1af4 00:11:22.002 Serial Number: 12341 00:11:22.002 Model Number: QEMU NVMe Ctrl 00:11:22.002 Firmware Version: 8.0.0 00:11:22.002 Recommended Arb Burst: 6 00:11:22.002 IEEE OUI Identifier: 00 54 52 00:11:22.002 Multi-path I/O 00:11:22.002 May have multiple subsystem ports: No 00:11:22.002 May have multiple controllers: No 00:11:22.002 Associated with SR-IOV VF: No 00:11:22.002 Max Data Transfer Size: 524288 00:11:22.002 Max Number of Namespaces: 256 00:11:22.002 Max Number of I/O Queues: 64 00:11:22.002 NVMe Specification Version (VS): 1.4 00:11:22.002 NVMe Specification Version (Identify): 1.4 00:11:22.002 Maximum Queue Entries: 2048 00:11:22.002 Contiguous Queues Required: Yes 00:11:22.002 Arbitration Mechanisms Supported 00:11:22.002 Weighted Round Robin: Not Supported 00:11:22.002 Vendor Specific: Not Supported 00:11:22.002 Reset Timeout: 7500 ms 00:11:22.002 Doorbell Stride: 4 bytes 00:11:22.002 NVM Subsystem Reset: Not Supported 00:11:22.002 Command Sets Supported 00:11:22.002 NVM Command Set: Supported 00:11:22.002 Boot Partition: Not Supported 00:11:22.002 Memory Page Size Minimum: 4096 bytes 00:11:22.002 Memory Page Size Maximum: 65536 bytes 00:11:22.002 Persistent Memory Region: Not Supported 00:11:22.002 Optional Asynchronous Events Supported 00:11:22.002 Namespace Attribute Notices: Supported 00:11:22.002 Firmware Activation Notices: Not Supported 00:11:22.002 ANA Change Notices: Not Supported 00:11:22.002 PLE Aggregate Log Change Notices: Not Supported 00:11:22.002 LBA Status Info Alert Notices: Not Supported 00:11:22.002 EGE Aggregate Log Change Notices: Not Supported 00:11:22.002 Normal NVM Subsystem Shutdown event: Not Supported 00:11:22.002 Zone Descriptor Change Notices: Not Supported 00:11:22.002 Discovery Log Change Notices: Not Supported 00:11:22.002 Controller Attributes 00:11:22.002 128-bit Host Identifier: Not Supported 00:11:22.002 Non-Operational Permissive Mode: Not Supported 00:11:22.002 NVM Sets: Not Supported 00:11:22.002 Read Recovery Levels: Not Supported 00:11:22.002 Endurance Groups: Not Supported 00:11:22.002 Predictable Latency Mode: Not Supported 00:11:22.002 Traffic Based Keep Alive: Not Supported 00:11:22.002 Namespace Granularity: Not Supported 00:11:22.002 SQ Associations: Not Supported 00:11:22.002 UUID List: Not Supported 00:11:22.002 Multi-Domain Subsystem: Not Supported 00:11:22.002 Fixed Capacity Management: Not Supported 00:11:22.002 Variable Capacity Management: Not Supported 00:11:22.002 Delete Endurance Group: Not Supported 00:11:22.002 Delete NVM Set: Not Supported 00:11:22.002 Extended LBA Formats Supported: Supported 
00:11:22.002 Flexible Data Placement Supported: Not Supported 00:11:22.002 00:11:22.002 Controller Memory Buffer Support 00:11:22.002 ================================ 00:11:22.002 Supported: No 00:11:22.002 00:11:22.002 Persistent Memory Region Support 00:11:22.002 ================================ 00:11:22.002 Supported: No 00:11:22.002 00:11:22.002 Admin Command Set Attributes 00:11:22.002 ============================ 00:11:22.002 Security Send/Receive: Not Supported 00:11:22.002 Format NVM: Supported 00:11:22.002 Firmware Activate/Download: Not Supported 00:11:22.002 Namespace Management: Supported 00:11:22.002 Device Self-Test: Not Supported 00:11:22.002 Directives: Supported 00:11:22.002 NVMe-MI: Not Supported 00:11:22.002 Virtualization Management: Not Supported 00:11:22.002 Doorbell Buffer Config: Supported 00:11:22.002 Get LBA Status Capability: Not Supported 00:11:22.002 Command & Feature Lockdown Capability: Not Supported 00:11:22.002 Abort Command Limit: 4 00:11:22.002 Async Event Request Limit: 4 00:11:22.002 Number of Firmware Slots: N/A 00:11:22.002 Firmware Slot 1 Read-Only: N/A 00:11:22.002 Firmware Activation Without Reset: N/A 00:11:22.002 Multiple Update Detection Support: N/A 00:11:22.002 Firmware Update Granularity: No Information Provided 00:11:22.002 Per-Namespace SMART Log: Yes 00:11:22.002 Asymmetric Namespace Access Log Page: Not Supported 00:11:22.002 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:22.002 Command Effects Log Page: Supported 00:11:22.002 Get Log Page Extended Data: Supported 00:11:22.002 Telemetry Log Pages: Not Supported 00:11:22.002 Persistent Event Log Pages: Not Supported 00:11:22.002 Supported Log Pages Log Page: May Support 00:11:22.002 Commands Supported & Effects Log Page: Not Supported 00:11:22.002 Feature Identifiers & Effects Log Page:May Support 00:11:22.002 NVMe-MI Commands & Effects Log Page: May Support 00:11:22.002 Data Area 4 for Telemetry Log: Not Supported 00:11:22.002 Error Log Page Entries Supported: 1 00:11:22.002 Keep Alive: Not Supported 00:11:22.002 00:11:22.002 NVM Command Set Attributes 00:11:22.002 ========================== 00:11:22.002 Submission Queue Entry Size 00:11:22.002 Max: 64 00:11:22.002 Min: 64 00:11:22.002 Completion Queue Entry Size 00:11:22.002 Max: 16 00:11:22.002 Min: 16 00:11:22.002 Number of Namespaces: 256 00:11:22.002 Compare Command: Supported 00:11:22.002 Write Uncorrectable Command: Not Supported 00:11:22.002 Dataset Management Command: Supported 00:11:22.002 Write Zeroes Command: Supported 00:11:22.003 Set Features Save Field: Supported 00:11:22.003 Reservations: Not Supported 00:11:22.003 Timestamp: Supported 00:11:22.003 Copy: Supported 00:11:22.003 Volatile Write Cache: Present 00:11:22.003 Atomic Write Unit (Normal): 1 00:11:22.003 Atomic Write Unit (PFail): 1 00:11:22.003 Atomic Compare & Write Unit: 1 00:11:22.003 Fused Compare & Write: Not Supported 00:11:22.003 Scatter-Gather List 00:11:22.003 SGL Command Set: Supported 00:11:22.003 SGL Keyed: Not Supported 00:11:22.003 SGL Bit Bucket Descriptor: Not Supported 00:11:22.003 SGL Metadata Pointer: Not Supported 00:11:22.003 Oversized SGL: Not Supported 00:11:22.003 SGL Metadata Address: Not Supported 00:11:22.003 SGL Offset: Not Supported 00:11:22.003 Transport SGL Data Block: Not Supported 00:11:22.003 Replay Protected Memory Block: Not Supported 00:11:22.003 00:11:22.003 Firmware Slot Information 00:11:22.003 ========================= 00:11:22.003 Active slot: 1 00:11:22.003 Slot 1 Firmware Revision: 1.0 00:11:22.003 00:11:22.003 
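
Each controller report in this section comes from a single spdk_nvme_identify invocation against one PCIe transport address, as echoed by the harness above. A minimal sketch of reproducing this controller's dump by hand, assuming the same repo checkout path the job uses and that the device is already bound for SPDK use (root privileges are typically required, hence the sudo):

# Re-run identify against the 0000:00:07.0 controller only; the -r transport
# ID string and the -i shared-memory group ID match the invocation logged above.
sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:PCIe traddr:0000:00:07.0' -i 0
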
00:11:22.003 Commands Supported and Effects 00:11:22.003 ============================== 00:11:22.003 Admin Commands 00:11:22.003 -------------- 00:11:22.003 Delete I/O Submission Queue (00h): Supported 00:11:22.003 Create I/O Submission Queue (01h): Supported 00:11:22.003 Get Log Page (02h): Supported 00:11:22.003 Delete I/O Completion Queue (04h): Supported 00:11:22.003 Create I/O Completion Queue (05h): Supported 00:11:22.003 Identify (06h): Supported 00:11:22.003 Abort (08h): Supported 00:11:22.003 Set Features (09h): Supported 00:11:22.003 Get Features (0Ah): Supported 00:11:22.003 Asynchronous Event Request (0Ch): Supported 00:11:22.003 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:22.003 Directive Send (19h): Supported 00:11:22.003 Directive Receive (1Ah): Supported 00:11:22.003 Virtualization Management (1Ch): Supported 00:11:22.003 Doorbell Buffer Config (7Ch): Supported 00:11:22.003 Format NVM (80h): Supported LBA-Change 00:11:22.003 I/O Commands 00:11:22.003 ------------ 00:11:22.003 Flush (00h): Supported LBA-Change 00:11:22.003 Write (01h): Supported LBA-Change 00:11:22.003 Read (02h): Supported 00:11:22.003 Compare (05h): Supported 00:11:22.003 Write Zeroes (08h): Supported LBA-Change 00:11:22.003 Dataset Management (09h): Supported LBA-Change 00:11:22.003 Unknown (0Ch): Supported 00:11:22.003 Unknown (12h): Supported 00:11:22.003 Copy (19h): Supported LBA-Change 00:11:22.003 Unknown (1Dh): Supported LBA-Change 00:11:22.003 00:11:22.003 Error Log 00:11:22.003 ========= 00:11:22.003 00:11:22.003 Arbitration 00:11:22.003 =========== 00:11:22.003 Arbitration Burst: no limit 00:11:22.003 00:11:22.003 Power Management 00:11:22.003 ================ 00:11:22.003 Number of Power States: 1 00:11:22.003 Current Power State: Power State #0 00:11:22.003 Power State #0: 00:11:22.003 Max Power: 25.00 W 00:11:22.003 Non-Operational State: Operational 00:11:22.003 Entry Latency: 16 microseconds 00:11:22.003 Exit Latency: 4 microseconds 00:11:22.003 Relative Read Throughput: 0 00:11:22.003 Relative Read Latency: 0 00:11:22.003 Relative Write Throughput: 0 00:11:22.003 Relative Write Latency: 0 00:11:22.003 Idle Power: Not Reported 00:11:22.003 Active Power: Not Reported 00:11:22.003 Non-Operational Permissive Mode: Not Supported 00:11:22.003 00:11:22.003 Health Information 00:11:22.003 ================== 00:11:22.003 Critical Warnings: 00:11:22.003 Available Spare Space: OK 00:11:22.003 Temperature: OK 00:11:22.003 Device Reliability: OK 00:11:22.003 Read Only: No 00:11:22.003 Volatile Memory Backup: OK 00:11:22.003 Current Temperature: 323 Kelvin (50 Celsius) 00:11:22.003 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:22.003 Available Spare: 0% 00:11:22.003 Available Spare Threshold: 0% 00:11:22.003 Life Percentage Used: 0% 00:11:22.003 Data Units Read: 1182 00:11:22.003 Data Units Written: 548 00:11:22.003 Host Read Commands: 56511 00:11:22.003 Host Write Commands: 27780 00:11:22.003 Controller Busy Time: 0 minutes 00:11:22.003 Power Cycles: 0 00:11:22.003 Power On Hours: 0 hours 00:11:22.003 Unsafe Shutdowns: 0 00:11:22.003 Unrecoverable Media Errors: 0 00:11:22.003 Lifetime Error Log Entries: 0 00:11:22.003 Warning Temperature Time: 0 minutes 00:11:22.003 Critical Temperature Time: 0 minutes 00:11:22.003 00:11:22.003 Number of Queues 00:11:22.003 ================ 00:11:22.003 Number of I/O Submission Queues: 64 00:11:22.003 Number of I/O Completion Queues: 64 00:11:22.003 00:11:22.003 ZNS Specific Controller Data 00:11:22.003 ============================ 
00:11:22.003 Zone Append Size Limit: 0 00:11:22.003 00:11:22.003 00:11:22.003 Active Namespaces 00:11:22.003 ================= 00:11:22.003 Namespace ID:1 00:11:22.003 Error Recovery Timeout: Unlimited 00:11:22.003 Command Set Identifier: NVM (00h) 00:11:22.003 Deallocate: Supported 00:11:22.003 Deallocated/Unwritten Error: Supported 00:11:22.003 Deallocated Read Value: All 0x00 00:11:22.003 Deallocate in Write Zeroes: Not Supported 00:11:22.003 Deallocated Guard Field: 0xFFFF 00:11:22.003 Flush: Supported 00:11:22.003 Reservation: Not Supported 00:11:22.003 Namespace Sharing Capabilities: Private 00:11:22.003 Size (in LBAs): 1310720 (5GiB) 00:11:22.003 Capacity (in LBAs): 1310720 (5GiB) 00:11:22.003 Utilization (in LBAs): 1310720 (5GiB) 00:11:22.003 Thin Provisioning: Not Supported 00:11:22.003 Per-NS Atomic Units: No 00:11:22.003 Maximum Single Source Range Length: 128 00:11:22.003 Maximum Copy Length: 128 00:11:22.003 Maximum Source Range Count: 128 00:11:22.003 NGUID/EUI64 Never Reused: No 00:11:22.003 Namespace Write Protected: No 00:11:22.003 Number of LBA Formats: 8 00:11:22.003 Current LBA Format: LBA Format #04 00:11:22.003 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:22.003 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:22.003 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:22.003 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:22.003 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:22.003 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:22.003 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:22.003 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:22.003 00:11:22.003 15:36:43 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:22.003 15:36:43 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:11:22.261 ===================================================== 00:11:22.261 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:22.261 ===================================================== 00:11:22.261 Controller Capabilities/Features 00:11:22.261 ================================ 00:11:22.261 Vendor ID: 1b36 00:11:22.261 Subsystem Vendor ID: 1af4 00:11:22.261 Serial Number: 12342 00:11:22.261 Model Number: QEMU NVMe Ctrl 00:11:22.261 Firmware Version: 8.0.0 00:11:22.261 Recommended Arb Burst: 6 00:11:22.261 IEEE OUI Identifier: 00 54 52 00:11:22.261 Multi-path I/O 00:11:22.261 May have multiple subsystem ports: No 00:11:22.261 May have multiple controllers: No 00:11:22.261 Associated with SR-IOV VF: No 00:11:22.261 Max Data Transfer Size: 524288 00:11:22.261 Max Number of Namespaces: 256 00:11:22.261 Max Number of I/O Queues: 64 00:11:22.261 NVMe Specification Version (VS): 1.4 00:11:22.261 NVMe Specification Version (Identify): 1.4 00:11:22.261 Maximum Queue Entries: 2048 00:11:22.261 Contiguous Queues Required: Yes 00:11:22.261 Arbitration Mechanisms Supported 00:11:22.261 Weighted Round Robin: Not Supported 00:11:22.261 Vendor Specific: Not Supported 00:11:22.261 Reset Timeout: 7500 ms 00:11:22.261 Doorbell Stride: 4 bytes 00:11:22.261 NVM Subsystem Reset: Not Supported 00:11:22.261 Command Sets Supported 00:11:22.261 NVM Command Set: Supported 00:11:22.261 Boot Partition: Not Supported 00:11:22.261 Memory Page Size Minimum: 4096 bytes 00:11:22.261 Memory Page Size Maximum: 65536 bytes 00:11:22.261 Persistent Memory Region: Not Supported 00:11:22.261 Optional Asynchronous Events Supported 00:11:22.261 Namespace Attribute Notices: Supported 00:11:22.261 
Firmware Activation Notices: Not Supported 00:11:22.261 ANA Change Notices: Not Supported 00:11:22.261 PLE Aggregate Log Change Notices: Not Supported 00:11:22.261 LBA Status Info Alert Notices: Not Supported 00:11:22.261 EGE Aggregate Log Change Notices: Not Supported 00:11:22.261 Normal NVM Subsystem Shutdown event: Not Supported 00:11:22.261 Zone Descriptor Change Notices: Not Supported 00:11:22.261 Discovery Log Change Notices: Not Supported 00:11:22.261 Controller Attributes 00:11:22.261 128-bit Host Identifier: Not Supported 00:11:22.262 Non-Operational Permissive Mode: Not Supported 00:11:22.262 NVM Sets: Not Supported 00:11:22.262 Read Recovery Levels: Not Supported 00:11:22.262 Endurance Groups: Not Supported 00:11:22.262 Predictable Latency Mode: Not Supported 00:11:22.262 Traffic Based Keep Alive: Not Supported 00:11:22.262 Namespace Granularity: Not Supported 00:11:22.262 SQ Associations: Not Supported 00:11:22.262 UUID List: Not Supported 00:11:22.262 Multi-Domain Subsystem: Not Supported 00:11:22.262 Fixed Capacity Management: Not Supported 00:11:22.262 Variable Capacity Management: Not Supported 00:11:22.262 Delete Endurance Group: Not Supported 00:11:22.262 Delete NVM Set: Not Supported 00:11:22.262 Extended LBA Formats Supported: Supported 00:11:22.262 Flexible Data Placement Supported: Not Supported 00:11:22.262 00:11:22.262 Controller Memory Buffer Support 00:11:22.262 ================================ 00:11:22.262 Supported: No 00:11:22.262 00:11:22.262 Persistent Memory Region Support 00:11:22.262 ================================ 00:11:22.262 Supported: No 00:11:22.262 00:11:22.262 Admin Command Set Attributes 00:11:22.262 ============================ 00:11:22.262 Security Send/Receive: Not Supported 00:11:22.262 Format NVM: Supported 00:11:22.262 Firmware Activate/Download: Not Supported 00:11:22.262 Namespace Management: Supported 00:11:22.262 Device Self-Test: Not Supported 00:11:22.262 Directives: Supported 00:11:22.262 NVMe-MI: Not Supported 00:11:22.262 Virtualization Management: Not Supported 00:11:22.262 Doorbell Buffer Config: Supported 00:11:22.262 Get LBA Status Capability: Not Supported 00:11:22.262 Command & Feature Lockdown Capability: Not Supported 00:11:22.262 Abort Command Limit: 4 00:11:22.262 Async Event Request Limit: 4 00:11:22.262 Number of Firmware Slots: N/A 00:11:22.262 Firmware Slot 1 Read-Only: N/A 00:11:22.262 Firmware Activation Without Reset: N/A 00:11:22.262 Multiple Update Detection Support: N/A 00:11:22.262 Firmware Update Granularity: No Information Provided 00:11:22.262 Per-Namespace SMART Log: Yes 00:11:22.262 Asymmetric Namespace Access Log Page: Not Supported 00:11:22.262 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:22.262 Command Effects Log Page: Supported 00:11:22.262 Get Log Page Extended Data: Supported 00:11:22.262 Telemetry Log Pages: Not Supported 00:11:22.262 Persistent Event Log Pages: Not Supported 00:11:22.262 Supported Log Pages Log Page: May Support 00:11:22.262 Commands Supported & Effects Log Page: Not Supported 00:11:22.262 Feature Identifiers & Effects Log Page:May Support 00:11:22.262 NVMe-MI Commands & Effects Log Page: May Support 00:11:22.262 Data Area 4 for Telemetry Log: Not Supported 00:11:22.262 Error Log Page Entries Supported: 1 00:11:22.262 Keep Alive: Not Supported 00:11:22.262 00:11:22.262 NVM Command Set Attributes 00:11:22.262 ========================== 00:11:22.262 Submission Queue Entry Size 00:11:22.262 Max: 64 00:11:22.262 Min: 64 00:11:22.262 Completion Queue Entry Size 00:11:22.262 Max: 16 
00:11:22.262 Min: 16 00:11:22.262 Number of Namespaces: 256 00:11:22.262 Compare Command: Supported 00:11:22.262 Write Uncorrectable Command: Not Supported 00:11:22.262 Dataset Management Command: Supported 00:11:22.262 Write Zeroes Command: Supported 00:11:22.262 Set Features Save Field: Supported 00:11:22.262 Reservations: Not Supported 00:11:22.262 Timestamp: Supported 00:11:22.262 Copy: Supported 00:11:22.262 Volatile Write Cache: Present 00:11:22.262 Atomic Write Unit (Normal): 1 00:11:22.262 Atomic Write Unit (PFail): 1 00:11:22.262 Atomic Compare & Write Unit: 1 00:11:22.262 Fused Compare & Write: Not Supported 00:11:22.262 Scatter-Gather List 00:11:22.262 SGL Command Set: Supported 00:11:22.262 SGL Keyed: Not Supported 00:11:22.262 SGL Bit Bucket Descriptor: Not Supported 00:11:22.262 SGL Metadata Pointer: Not Supported 00:11:22.262 Oversized SGL: Not Supported 00:11:22.262 SGL Metadata Address: Not Supported 00:11:22.262 SGL Offset: Not Supported 00:11:22.262 Transport SGL Data Block: Not Supported 00:11:22.262 Replay Protected Memory Block: Not Supported 00:11:22.262 00:11:22.262 Firmware Slot Information 00:11:22.262 ========================= 00:11:22.262 Active slot: 1 00:11:22.262 Slot 1 Firmware Revision: 1.0 00:11:22.262 00:11:22.262 00:11:22.262 Commands Supported and Effects 00:11:22.262 ============================== 00:11:22.262 Admin Commands 00:11:22.262 -------------- 00:11:22.262 Delete I/O Submission Queue (00h): Supported 00:11:22.262 Create I/O Submission Queue (01h): Supported 00:11:22.262 Get Log Page (02h): Supported 00:11:22.262 Delete I/O Completion Queue (04h): Supported 00:11:22.262 Create I/O Completion Queue (05h): Supported 00:11:22.262 Identify (06h): Supported 00:11:22.262 Abort (08h): Supported 00:11:22.262 Set Features (09h): Supported 00:11:22.262 Get Features (0Ah): Supported 00:11:22.262 Asynchronous Event Request (0Ch): Supported 00:11:22.262 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:22.262 Directive Send (19h): Supported 00:11:22.262 Directive Receive (1Ah): Supported 00:11:22.262 Virtualization Management (1Ch): Supported 00:11:22.262 Doorbell Buffer Config (7Ch): Supported 00:11:22.262 Format NVM (80h): Supported LBA-Change 00:11:22.262 I/O Commands 00:11:22.262 ------------ 00:11:22.262 Flush (00h): Supported LBA-Change 00:11:22.262 Write (01h): Supported LBA-Change 00:11:22.262 Read (02h): Supported 00:11:22.262 Compare (05h): Supported 00:11:22.262 Write Zeroes (08h): Supported LBA-Change 00:11:22.262 Dataset Management (09h): Supported LBA-Change 00:11:22.262 Unknown (0Ch): Supported 00:11:22.262 Unknown (12h): Supported 00:11:22.262 Copy (19h): Supported LBA-Change 00:11:22.262 Unknown (1Dh): Supported LBA-Change 00:11:22.262 00:11:22.262 Error Log 00:11:22.262 ========= 00:11:22.262 00:11:22.262 Arbitration 00:11:22.262 =========== 00:11:22.262 Arbitration Burst: no limit 00:11:22.262 00:11:22.262 Power Management 00:11:22.262 ================ 00:11:22.262 Number of Power States: 1 00:11:22.262 Current Power State: Power State #0 00:11:22.262 Power State #0: 00:11:22.262 Max Power: 25.00 W 00:11:22.262 Non-Operational State: Operational 00:11:22.262 Entry Latency: 16 microseconds 00:11:22.262 Exit Latency: 4 microseconds 00:11:22.262 Relative Read Throughput: 0 00:11:22.262 Relative Read Latency: 0 00:11:22.262 Relative Write Throughput: 0 00:11:22.262 Relative Write Latency: 0 00:11:22.262 Idle Power: Not Reported 00:11:22.262 Active Power: Not Reported 00:11:22.262 Non-Operational Permissive Mode: Not Supported 
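
The Health Information block that follows reports temperatures in Kelvin and I/O volume in NVMe data units, which the spec defines as thousands of 512-byte units. A quick conversion of the values printed below, as shell arithmetic (the -273 offset matches the integer rounding the tool itself applies):

# 323 K to Celsius, and 3652 data units to bytes read by the host.
echo $((323 - 273))          # -> 50, matching the "(50 Celsius)" shown below
echo $((3652 * 1000 * 512))  # -> 1869824000 bytes, roughly 1.87 GB read
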
00:11:22.262 00:11:22.262 Health Information 00:11:22.262 ================== 00:11:22.262 Critical Warnings: 00:11:22.262 Available Spare Space: OK 00:11:22.262 Temperature: OK 00:11:22.262 Device Reliability: OK 00:11:22.262 Read Only: No 00:11:22.262 Volatile Memory Backup: OK 00:11:22.262 Current Temperature: 323 Kelvin (50 Celsius) 00:11:22.262 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:22.262 Available Spare: 0% 00:11:22.262 Available Spare Threshold: 0% 00:11:22.262 Life Percentage Used: 0% 00:11:22.262 Data Units Read: 3652 00:11:22.262 Data Units Written: 1682 00:11:22.262 Host Read Commands: 170820 00:11:22.262 Host Write Commands: 83790 00:11:22.262 Controller Busy Time: 0 minutes 00:11:22.262 Power Cycles: 0 00:11:22.262 Power On Hours: 0 hours 00:11:22.262 Unsafe Shutdowns: 0 00:11:22.262 Unrecoverable Media Errors: 0 00:11:22.262 Lifetime Error Log Entries: 0 00:11:22.262 Warning Temperature Time: 0 minutes 00:11:22.262 Critical Temperature Time: 0 minutes 00:11:22.262 00:11:22.262 Number of Queues 00:11:22.262 ================ 00:11:22.262 Number of I/O Submission Queues: 64 00:11:22.262 Number of I/O Completion Queues: 64 00:11:22.262 00:11:22.262 ZNS Specific Controller Data 00:11:22.262 ============================ 00:11:22.262 Zone Append Size Limit: 0 00:11:22.262 00:11:22.262 00:11:22.262 Active Namespaces 00:11:22.262 ================= 00:11:22.262 Namespace ID:1 00:11:22.262 Error Recovery Timeout: Unlimited 00:11:22.262 Command Set Identifier: NVM (00h) 00:11:22.262 Deallocate: Supported 00:11:22.262 Deallocated/Unwritten Error: Supported 00:11:22.262 Deallocated Read Value: All 0x00 00:11:22.262 Deallocate in Write Zeroes: Not Supported 00:11:22.262 Deallocated Guard Field: 0xFFFF 00:11:22.262 Flush: Supported 00:11:22.262 Reservation: Not Supported 00:11:22.262 Namespace Sharing Capabilities: Private 00:11:22.262 Size (in LBAs): 1048576 (4GiB) 00:11:22.262 Capacity (in LBAs): 1048576 (4GiB) 00:11:22.262 Utilization (in LBAs): 1048576 (4GiB) 00:11:22.262 Thin Provisioning: Not Supported 00:11:22.262 Per-NS Atomic Units: No 00:11:22.262 Maximum Single Source Range Length: 128 00:11:22.262 Maximum Copy Length: 128 00:11:22.262 Maximum Source Range Count: 128 00:11:22.262 NGUID/EUI64 Never Reused: No 00:11:22.262 Namespace Write Protected: No 00:11:22.262 Number of LBA Formats: 8 00:11:22.262 Current LBA Format: LBA Format #04 00:11:22.262 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:22.262 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:22.262 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:22.262 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:22.262 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:22.262 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:22.262 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:22.262 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:22.262 00:11:22.262 Namespace ID:2 00:11:22.262 Error Recovery Timeout: Unlimited 00:11:22.262 Command Set Identifier: NVM (00h) 00:11:22.262 Deallocate: Supported 00:11:22.262 Deallocated/Unwritten Error: Supported 00:11:22.262 Deallocated Read Value: All 0x00 00:11:22.262 Deallocate in Write Zeroes: Not Supported 00:11:22.262 Deallocated Guard Field: 0xFFFF 00:11:22.262 Flush: Supported 00:11:22.262 Reservation: Not Supported 00:11:22.262 Namespace Sharing Capabilities: Private 00:11:22.262 Size (in LBAs): 1048576 (4GiB) 00:11:22.262 Capacity (in LBAs): 1048576 (4GiB) 00:11:22.262 Utilization (in LBAs): 1048576 (4GiB) 00:11:22.262 Thin 
Provisioning: Not Supported 00:11:22.262 Per-NS Atomic Units: No 00:11:22.262 Maximum Single Source Range Length: 128 00:11:22.262 Maximum Copy Length: 128 00:11:22.262 Maximum Source Range Count: 128 00:11:22.262 NGUID/EUI64 Never Reused: No 00:11:22.262 Namespace Write Protected: No 00:11:22.262 Number of LBA Formats: 8 00:11:22.262 Current LBA Format: LBA Format #04 00:11:22.262 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:22.262 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:22.262 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:22.262 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:22.262 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:22.262 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:22.262 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:22.262 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:22.262 00:11:22.262 Namespace ID:3 00:11:22.262 Error Recovery Timeout: Unlimited 00:11:22.262 Command Set Identifier: NVM (00h) 00:11:22.262 Deallocate: Supported 00:11:22.262 Deallocated/Unwritten Error: Supported 00:11:22.262 Deallocated Read Value: All 0x00 00:11:22.262 Deallocate in Write Zeroes: Not Supported 00:11:22.262 Deallocated Guard Field: 0xFFFF 00:11:22.262 Flush: Supported 00:11:22.262 Reservation: Not Supported 00:11:22.262 Namespace Sharing Capabilities: Private 00:11:22.262 Size (in LBAs): 1048576 (4GiB) 00:11:22.262 Capacity (in LBAs): 1048576 (4GiB) 00:11:22.262 Utilization (in LBAs): 1048576 (4GiB) 00:11:22.262 Thin Provisioning: Not Supported 00:11:22.262 Per-NS Atomic Units: No 00:11:22.262 Maximum Single Source Range Length: 128 00:11:22.262 Maximum Copy Length: 128 00:11:22.262 Maximum Source Range Count: 128 00:11:22.262 NGUID/EUI64 Never Reused: No 00:11:22.262 Namespace Write Protected: No 00:11:22.262 Number of LBA Formats: 8 00:11:22.262 Current LBA Format: LBA Format #04 00:11:22.262 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:22.262 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:22.262 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:22.262 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:22.262 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:22.262 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:22.262 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:22.262 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:22.262 00:11:22.262 15:36:43 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:22.262 15:36:43 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:11:22.520 ===================================================== 00:11:22.520 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:22.520 ===================================================== 00:11:22.520 Controller Capabilities/Features 00:11:22.520 ================================ 00:11:22.520 Vendor ID: 1b36 00:11:22.520 Subsystem Vendor ID: 1af4 00:11:22.520 Serial Number: 12343 00:11:22.520 Model Number: QEMU NVMe Ctrl 00:11:22.520 Firmware Version: 8.0.0 00:11:22.520 Recommended Arb Burst: 6 00:11:22.520 IEEE OUI Identifier: 00 54 52 00:11:22.520 Multi-path I/O 00:11:22.520 May have multiple subsystem ports: No 00:11:22.520 May have multiple controllers: Yes 00:11:22.520 Associated with SR-IOV VF: No 00:11:22.520 Max Data Transfer Size: 524288 00:11:22.520 Max Number of Namespaces: 256 00:11:22.521 Max Number of I/O Queues: 64 00:11:22.521 NVMe Specification Version (VS): 1.4 00:11:22.521 NVMe Specification Version 
(Identify): 1.4 00:11:22.521 Maximum Queue Entries: 2048 00:11:22.521 Contiguous Queues Required: Yes 00:11:22.521 Arbitration Mechanisms Supported 00:11:22.521 Weighted Round Robin: Not Supported 00:11:22.521 Vendor Specific: Not Supported 00:11:22.521 Reset Timeout: 7500 ms 00:11:22.521 Doorbell Stride: 4 bytes 00:11:22.521 NVM Subsystem Reset: Not Supported 00:11:22.521 Command Sets Supported 00:11:22.521 NVM Command Set: Supported 00:11:22.521 Boot Partition: Not Supported 00:11:22.521 Memory Page Size Minimum: 4096 bytes 00:11:22.521 Memory Page Size Maximum: 65536 bytes 00:11:22.521 Persistent Memory Region: Not Supported 00:11:22.521 Optional Asynchronous Events Supported 00:11:22.521 Namespace Attribute Notices: Supported 00:11:22.521 Firmware Activation Notices: Not Supported 00:11:22.521 ANA Change Notices: Not Supported 00:11:22.521 PLE Aggregate Log Change Notices: Not Supported 00:11:22.521 LBA Status Info Alert Notices: Not Supported 00:11:22.521 EGE Aggregate Log Change Notices: Not Supported 00:11:22.521 Normal NVM Subsystem Shutdown event: Not Supported 00:11:22.521 Zone Descriptor Change Notices: Not Supported 00:11:22.521 Discovery Log Change Notices: Not Supported 00:11:22.521 Controller Attributes 00:11:22.521 128-bit Host Identifier: Not Supported 00:11:22.521 Non-Operational Permissive Mode: Not Supported 00:11:22.521 NVM Sets: Not Supported 00:11:22.521 Read Recovery Levels: Not Supported 00:11:22.521 Endurance Groups: Supported 00:11:22.521 Predictable Latency Mode: Not Supported 00:11:22.521 Traffic Based Keep Alive: Not Supported 00:11:22.521 Namespace Granularity: Not Supported 00:11:22.521 SQ Associations: Not Supported 00:11:22.521 UUID List: Not Supported 00:11:22.521 Multi-Domain Subsystem: Not Supported 00:11:22.521 Fixed Capacity Management: Not Supported 00:11:22.521 Variable Capacity Management: Not Supported 00:11:22.521 Delete Endurance Group: Not Supported 00:11:22.521 Delete NVM Set: Not Supported 00:11:22.521 Extended LBA Formats Supported: Supported 00:11:22.521 Flexible Data Placement Supported: Supported 00:11:22.521 00:11:22.521 Controller Memory Buffer Support 00:11:22.521 ================================ 00:11:22.521 Supported: No 00:11:22.521 00:11:22.521 Persistent Memory Region Support 00:11:22.521 ================================ 00:11:22.521 Supported: No 00:11:22.521 00:11:22.521 Admin Command Set Attributes 00:11:22.521 ============================ 00:11:22.521 Security Send/Receive: Not Supported 00:11:22.521 Format NVM: Supported 00:11:22.521 Firmware Activate/Download: Not Supported 00:11:22.521 Namespace Management: Supported 00:11:22.521 Device Self-Test: Not Supported 00:11:22.521 Directives: Supported 00:11:22.521 NVMe-MI: Not Supported 00:11:22.521 Virtualization Management: Not Supported 00:11:22.521 Doorbell Buffer Config: Supported 00:11:22.521 Get LBA Status Capability: Not Supported 00:11:22.521 Command & Feature Lockdown Capability: Not Supported 00:11:22.521 Abort Command Limit: 4 00:11:22.521 Async Event Request Limit: 4 00:11:22.521 Number of Firmware Slots: N/A 00:11:22.521 Firmware Slot 1 Read-Only: N/A 00:11:22.521 Firmware Activation Without Reset: N/A 00:11:22.521 Multiple Update Detection Support: N/A 00:11:22.521 Firmware Update Granularity: No Information Provided 00:11:22.521 Per-Namespace SMART Log: Yes 00:11:22.521 Asymmetric Namespace Access Log Page: Not Supported 00:11:22.521 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:22.521 Command Effects Log Page: Supported 00:11:22.521 Get Log Page Extended 
Data: Supported 00:11:22.521 Telemetry Log Pages: Not Supported 00:11:22.521 Persistent Event Log Pages: Not Supported 00:11:22.521 Supported Log Pages Log Page: May Support 00:11:22.521 Commands Supported & Effects Log Page: Not Supported 00:11:22.521 Feature Identifiers & Effects Log Page:May Support 00:11:22.521 NVMe-MI Commands & Effects Log Page: May Support 00:11:22.521 Data Area 4 for Telemetry Log: Not Supported 00:11:22.521 Error Log Page Entries Supported: 1 00:11:22.521 Keep Alive: Not Supported 00:11:22.521 00:11:22.521 NVM Command Set Attributes 00:11:22.521 ========================== 00:11:22.521 Submission Queue Entry Size 00:11:22.521 Max: 64 00:11:22.521 Min: 64 00:11:22.521 Completion Queue Entry Size 00:11:22.521 Max: 16 00:11:22.521 Min: 16 00:11:22.521 Number of Namespaces: 256 00:11:22.521 Compare Command: Supported 00:11:22.521 Write Uncorrectable Command: Not Supported 00:11:22.521 Dataset Management Command: Supported 00:11:22.521 Write Zeroes Command: Supported 00:11:22.521 Set Features Save Field: Supported 00:11:22.521 Reservations: Not Supported 00:11:22.521 Timestamp: Supported 00:11:22.521 Copy: Supported 00:11:22.521 Volatile Write Cache: Present 00:11:22.521 Atomic Write Unit (Normal): 1 00:11:22.521 Atomic Write Unit (PFail): 1 00:11:22.521 Atomic Compare & Write Unit: 1 00:11:22.521 Fused Compare & Write: Not Supported 00:11:22.521 Scatter-Gather List 00:11:22.521 SGL Command Set: Supported 00:11:22.521 SGL Keyed: Not Supported 00:11:22.521 SGL Bit Bucket Descriptor: Not Supported 00:11:22.521 SGL Metadata Pointer: Not Supported 00:11:22.521 Oversized SGL: Not Supported 00:11:22.521 SGL Metadata Address: Not Supported 00:11:22.521 SGL Offset: Not Supported 00:11:22.521 Transport SGL Data Block: Not Supported 00:11:22.521 Replay Protected Memory Block: Not Supported 00:11:22.521 00:11:22.521 Firmware Slot Information 00:11:22.521 ========================= 00:11:22.521 Active slot: 1 00:11:22.521 Slot 1 Firmware Revision: 1.0 00:11:22.521 00:11:22.521 00:11:22.521 Commands Supported and Effects 00:11:22.521 ============================== 00:11:22.521 Admin Commands 00:11:22.521 -------------- 00:11:22.521 Delete I/O Submission Queue (00h): Supported 00:11:22.521 Create I/O Submission Queue (01h): Supported 00:11:22.521 Get Log Page (02h): Supported 00:11:22.521 Delete I/O Completion Queue (04h): Supported 00:11:22.521 Create I/O Completion Queue (05h): Supported 00:11:22.521 Identify (06h): Supported 00:11:22.521 Abort (08h): Supported 00:11:22.521 Set Features (09h): Supported 00:11:22.521 Get Features (0Ah): Supported 00:11:22.521 Asynchronous Event Request (0Ch): Supported 00:11:22.521 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:22.521 Directive Send (19h): Supported 00:11:22.521 Directive Receive (1Ah): Supported 00:11:22.521 Virtualization Management (1Ch): Supported 00:11:22.521 Doorbell Buffer Config (7Ch): Supported 00:11:22.521 Format NVM (80h): Supported LBA-Change 00:11:22.521 I/O Commands 00:11:22.521 ------------ 00:11:22.521 Flush (00h): Supported LBA-Change 00:11:22.521 Write (01h): Supported LBA-Change 00:11:22.521 Read (02h): Supported 00:11:22.521 Compare (05h): Supported 00:11:22.521 Write Zeroes (08h): Supported LBA-Change 00:11:22.521 Dataset Management (09h): Supported LBA-Change 00:11:22.521 Unknown (0Ch): Supported 00:11:22.521 Unknown (12h): Supported 00:11:22.521 Copy (19h): Supported LBA-Change 00:11:22.521 Unknown (1Dh): Supported LBA-Change 00:11:22.521 00:11:22.521 Error Log 00:11:22.521 ========= 
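
Unlike the three controllers above, this one (Subsystem NQN nqn.2019-08.org.qemu:fdp-subsys3) reports Endurance Groups and Flexible Data Placement as Supported, so FDP-specific log pages appear at the end of its report. These dumps run long; one way to pull a single section out of a copy saved straight from the tool, using a hypothetical file name:

# Hypothetical helper, not part of the harness: print the FDP configurations
# log page section (header through the next blank line) from a saved dump.
sed -n '/^FDP configurations log page/,/^$/p' identify-12343.txt
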
00:11:22.521 00:11:22.521 Arbitration 00:11:22.521 =========== 00:11:22.521 Arbitration Burst: no limit 00:11:22.521 00:11:22.521 Power Management 00:11:22.521 ================ 00:11:22.521 Number of Power States: 1 00:11:22.521 Current Power State: Power State #0 00:11:22.521 Power State #0: 00:11:22.521 Max Power: 25.00 W 00:11:22.521 Non-Operational State: Operational 00:11:22.521 Entry Latency: 16 microseconds 00:11:22.521 Exit Latency: 4 microseconds 00:11:22.521 Relative Read Throughput: 0 00:11:22.521 Relative Read Latency: 0 00:11:22.521 Relative Write Throughput: 0 00:11:22.521 Relative Write Latency: 0 00:11:22.521 Idle Power: Not Reported 00:11:22.521 Active Power: Not Reported 00:11:22.521 Non-Operational Permissive Mode: Not Supported 00:11:22.521 00:11:22.521 Health Information 00:11:22.521 ================== 00:11:22.521 Critical Warnings: 00:11:22.521 Available Spare Space: OK 00:11:22.521 Temperature: OK 00:11:22.521 Device Reliability: OK 00:11:22.521 Read Only: No 00:11:22.521 Volatile Memory Backup: OK 00:11:22.521 Current Temperature: 323 Kelvin (50 Celsius) 00:11:22.521 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:22.521 Available Spare: 0% 00:11:22.521 Available Spare Threshold: 0% 00:11:22.521 Life Percentage Used: 0% 00:11:22.521 Data Units Read: 1246 00:11:22.522 Data Units Written: 576 00:11:22.522 Host Read Commands: 57096 00:11:22.522 Host Write Commands: 28039 00:11:22.522 Controller Busy Time: 0 minutes 00:11:22.522 Power Cycles: 0 00:11:22.522 Power On Hours: 0 hours 00:11:22.522 Unsafe Shutdowns: 0 00:11:22.522 Unrecoverable Media Errors: 0 00:11:22.522 Lifetime Error Log Entries: 0 00:11:22.522 Warning Temperature Time: 0 minutes 00:11:22.522 Critical Temperature Time: 0 minutes 00:11:22.522 00:11:22.522 Number of Queues 00:11:22.522 ================ 00:11:22.522 Number of I/O Submission Queues: 64 00:11:22.522 Number of I/O Completion Queues: 64 00:11:22.522 00:11:22.522 ZNS Specific Controller Data 00:11:22.522 ============================ 00:11:22.522 Zone Append Size Limit: 0 00:11:22.522 00:11:22.522 00:11:22.522 Active Namespaces 00:11:22.522 ================= 00:11:22.522 Namespace ID:1 00:11:22.522 Error Recovery Timeout: Unlimited 00:11:22.522 Command Set Identifier: NVM (00h) 00:11:22.522 Deallocate: Supported 00:11:22.522 Deallocated/Unwritten Error: Supported 00:11:22.522 Deallocated Read Value: All 0x00 00:11:22.522 Deallocate in Write Zeroes: Not Supported 00:11:22.522 Deallocated Guard Field: 0xFFFF 00:11:22.522 Flush: Supported 00:11:22.522 Reservation: Not Supported 00:11:22.522 Namespace Sharing Capabilities: Multiple Controllers 00:11:22.522 Size (in LBAs): 262144 (1GiB) 00:11:22.522 Capacity (in LBAs): 262144 (1GiB) 00:11:22.522 Utilization (in LBAs): 262144 (1GiB) 00:11:22.522 Thin Provisioning: Not Supported 00:11:22.522 Per-NS Atomic Units: No 00:11:22.522 Maximum Single Source Range Length: 128 00:11:22.522 Maximum Copy Length: 128 00:11:22.522 Maximum Source Range Count: 128 00:11:22.522 NGUID/EUI64 Never Reused: No 00:11:22.522 Namespace Write Protected: No 00:11:22.522 Endurance group ID: 1 00:11:22.522 Number of LBA Formats: 8 00:11:22.522 Current LBA Format: LBA Format #04 00:11:22.522 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:22.522 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:22.522 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:22.522 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:22.522 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:22.522 LBA Format #05: Data Size: 4096 
Metadata Size: 8 00:11:22.522 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:22.522 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:22.522 00:11:22.522 Get Feature FDP: 00:11:22.522 ================ 00:11:22.522 Enabled: Yes 00:11:22.522 FDP configuration index: 0 00:11:22.522 00:11:22.522 FDP configurations log page 00:11:22.522 =========================== 00:11:22.522 Number of FDP configurations: 1 00:11:22.522 Version: 0 00:11:22.522 Size: 112 00:11:22.522 FDP Configuration Descriptor: 0 00:11:22.522 Descriptor Size: 96 00:11:22.522 Reclaim Group Identifier format: 2 00:11:22.522 FDP Volatile Write Cache: Not Present 00:11:22.522 FDP Configuration: Valid 00:11:22.522 Vendor Specific Size: 0 00:11:22.522 Number of Reclaim Groups: 2 00:11:22.522 Number of Reclaim Unit Handles: 8 00:11:22.522 Max Placement Identifiers: 128 00:11:22.522 Number of Namespaces Supported: 256 00:11:22.522 Reclaim Unit Nominal Size: 6000000 bytes 00:11:22.522 Estimated Reclaim Unit Time Limit: Not Reported 00:11:22.522 RUH Desc #000: RUH Type: Initially Isolated 00:11:22.522 RUH Desc #001: RUH Type: Initially Isolated 00:11:22.522 RUH Desc #002: RUH Type: Initially Isolated 00:11:22.522 RUH Desc #003: RUH Type: Initially Isolated 00:11:22.522 RUH Desc #004: RUH Type: Initially Isolated 00:11:22.522 RUH Desc #005: RUH Type: Initially Isolated 00:11:22.522 RUH Desc #006: RUH Type: Initially Isolated 00:11:22.522 RUH Desc #007: RUH Type: Initially Isolated 00:11:22.522 00:11:22.522 FDP reclaim unit handle usage log page 00:11:22.522 ====================================== 00:11:22.522 Number of Reclaim Unit Handles: 8 00:11:22.522 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:22.522 RUH Usage Desc #001: RUH Attributes: Unused 00:11:22.522 RUH Usage Desc #002: RUH Attributes: Unused 00:11:22.522 RUH Usage Desc #003: RUH Attributes: Unused 00:11:22.522 RUH Usage Desc #004: RUH Attributes: Unused 00:11:22.522 RUH Usage Desc #005: RUH Attributes: Unused 00:11:22.522 RUH Usage Desc #006: RUH Attributes: Unused 00:11:22.522 RUH Usage Desc #007: RUH Attributes: Unused 00:11:22.522 00:11:22.522 FDP statistics log page 00:11:22.522 ======================= 00:11:22.522 Host bytes with metadata written: 374603776 00:11:22.522 Media bytes with metadata written: 374734848 00:11:22.522 Media bytes erased: 0 00:11:22.522 00:11:22.522 FDP events log page 00:11:22.522 =================== 00:11:22.522 Number of FDP events: 0 00:11:22.522 00:11:22.522 00:11:22.522 real 0m1.662s 00:11:22.522 user 0m0.666s 00:11:22.522 sys 0m0.796s 00:11:22.522 15:36:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.522 15:36:43 -- common/autotest_common.sh@10 -- # set +x 00:11:22.522 ************************************ 00:11:22.522 END TEST nvme_identify 00:11:22.522 ************************************ 00:11:22.522 15:36:44 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:22.522 15:36:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:22.522 15:36:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:22.522 15:36:44 -- common/autotest_common.sh@10 -- # set +x 00:11:22.522 ************************************ 00:11:22.522 START TEST nvme_perf 00:11:22.522 ************************************ 00:11:22.522 15:36:44 -- common/autotest_common.sh@1104 -- # nvme_perf 00:11:22.522 15:36:44 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:23.907 Initializing NVMe Controllers 00:11:23.907 Attached to 
NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:23.907 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:23.907 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:23.907 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:23.907 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:23.907 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:11:23.907 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:11:23.907 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:11:23.907 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:11:23.907 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:11:23.907 Initialization complete. Launching workers. 00:11:23.907 ======================================================== 00:11:23.907 Latency(us) 00:11:23.907 Device Information : IOPS MiB/s Average min max 00:11:23.907 PCIE (0000:00:06.0) NSID 1 from core 0: 13263.42 155.43 9644.72 6709.44 34373.53 00:11:23.907 PCIE (0000:00:07.0) NSID 1 from core 0: 13263.42 155.43 9635.30 6955.34 33332.22 00:11:23.907 PCIE (0000:00:09.0) NSID 1 from core 0: 13263.42 155.43 9623.57 6927.30 32241.34 00:11:23.907 PCIE (0000:00:08.0) NSID 1 from core 0: 13263.42 155.43 9610.93 7008.13 30460.63 00:11:23.907 PCIE (0000:00:08.0) NSID 2 from core 0: 13263.42 155.43 9598.59 7017.63 28890.35 00:11:23.907 PCIE (0000:00:08.0) NSID 3 from core 0: 13263.42 155.43 9586.16 6962.97 27104.54 00:11:23.907 ======================================================== 00:11:23.907 Total : 79580.50 932.58 9616.54 6709.44 34373.53 00:11:23.907 00:11:23.907 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:11:23.907 ================================================================================= 00:11:23.907 1.00000% : 7208.960us 00:11:23.907 10.00000% : 8221.789us 00:11:23.907 25.00000% : 8698.415us 00:11:23.907 50.00000% : 9413.353us 00:11:23.907 75.00000% : 10128.291us 00:11:23.907 90.00000% : 10843.229us 00:11:23.907 95.00000% : 11439.011us 00:11:23.907 98.00000% : 12213.527us 00:11:23.907 99.00000% : 13047.622us 00:11:23.907 99.50000% : 32172.218us 00:11:23.907 99.90000% : 34078.720us 00:11:23.907 99.99000% : 34317.033us 00:11:23.907 99.99900% : 34555.345us 00:11:23.907 99.99990% : 34555.345us 00:11:23.907 99.99999% : 34555.345us 00:11:23.907 00:11:23.907 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:11:23.907 ================================================================================= 00:11:23.907 1.00000% : 7387.695us 00:11:23.907 10.00000% : 8340.945us 00:11:23.907 25.00000% : 8757.993us 00:11:23.907 50.00000% : 9413.353us 00:11:23.907 75.00000% : 10068.713us 00:11:23.907 90.00000% : 10843.229us 00:11:23.907 95.00000% : 11319.855us 00:11:23.907 98.00000% : 12034.793us 00:11:23.907 99.00000% : 12868.887us 00:11:23.907 99.50000% : 30742.342us 00:11:23.907 99.90000% : 32887.156us 00:11:23.907 99.99000% : 33363.782us 00:11:23.907 99.99900% : 33363.782us 00:11:23.907 99.99990% : 33363.782us 00:11:23.907 99.99999% : 33363.782us 00:11:23.907 00:11:23.907 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:11:23.907 ================================================================================= 00:11:23.907 1.00000% : 7357.905us 00:11:23.907 10.00000% : 8281.367us 00:11:23.907 25.00000% : 8757.993us 00:11:23.907 50.00000% : 9413.353us 00:11:23.907 75.00000% : 10068.713us 00:11:23.907 90.00000% : 10783.651us 00:11:23.907 95.00000% : 11260.276us 00:11:23.907 98.00000% : 11915.636us 00:11:23.907 99.00000% : 12570.996us 00:11:23.907 
99.50000% : 29789.091us 00:11:23.907 99.90000% : 31933.905us 00:11:23.907 99.99000% : 32410.531us 00:11:23.907 99.99900% : 32410.531us 00:11:23.907 99.99990% : 32410.531us 00:11:23.907 99.99999% : 32410.531us 00:11:23.907 00:11:23.907 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:11:23.907 ================================================================================= 00:11:23.907 1.00000% : 7357.905us 00:11:23.907 10.00000% : 8281.367us 00:11:23.907 25.00000% : 8757.993us 00:11:23.907 50.00000% : 9413.353us 00:11:23.907 75.00000% : 10068.713us 00:11:23.907 90.00000% : 10783.651us 00:11:23.907 95.00000% : 11379.433us 00:11:23.907 98.00000% : 12034.793us 00:11:23.907 99.00000% : 12928.465us 00:11:23.907 99.50000% : 28001.745us 00:11:23.907 99.90000% : 30027.404us 00:11:23.907 99.99000% : 30504.029us 00:11:23.907 99.99900% : 30504.029us 00:11:23.907 99.99990% : 30504.029us 00:11:23.907 99.99999% : 30504.029us 00:11:23.907 00:11:23.907 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:11:23.907 ================================================================================= 00:11:23.907 1.00000% : 7387.695us 00:11:23.907 10.00000% : 8340.945us 00:11:23.907 25.00000% : 8757.993us 00:11:23.907 50.00000% : 9413.353us 00:11:23.907 75.00000% : 10068.713us 00:11:23.907 90.00000% : 10783.651us 00:11:23.907 95.00000% : 11498.589us 00:11:23.907 98.00000% : 12332.684us 00:11:23.907 99.00000% : 13107.200us 00:11:23.907 99.50000% : 26452.713us 00:11:23.907 99.90000% : 28478.371us 00:11:23.907 99.99000% : 28954.996us 00:11:23.907 99.99900% : 28954.996us 00:11:23.907 99.99990% : 28954.996us 00:11:23.907 99.99999% : 28954.996us 00:11:23.907 00:11:23.907 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:11:23.907 ================================================================================= 00:11:23.907 1.00000% : 7387.695us 00:11:23.907 10.00000% : 8340.945us 00:11:23.907 25.00000% : 8757.993us 00:11:23.907 50.00000% : 9413.353us 00:11:23.907 75.00000% : 10068.713us 00:11:23.907 90.00000% : 10843.229us 00:11:23.907 95.00000% : 11498.589us 00:11:23.907 98.00000% : 12332.684us 00:11:23.907 99.00000% : 13762.560us 00:11:23.907 99.50000% : 24665.367us 00:11:23.907 99.90000% : 26691.025us 00:11:23.907 99.99000% : 27167.651us 00:11:23.907 99.99900% : 27167.651us 00:11:23.907 99.99990% : 27167.651us 00:11:23.907 99.99999% : 27167.651us 00:11:23.907 00:11:23.907 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:11:23.907 ============================================================================== 00:11:23.907 Range in us Cumulative IO count 00:11:23.907 6702.545 - 6732.335: 0.0075% ( 1) 00:11:23.907 6732.335 - 6762.124: 0.0150% ( 1) 00:11:23.907 6762.124 - 6791.913: 0.0376% ( 3) 00:11:23.907 6791.913 - 6821.702: 0.0526% ( 2) 00:11:23.907 6821.702 - 6851.491: 0.0676% ( 2) 00:11:23.907 6851.491 - 6881.280: 0.1127% ( 6) 00:11:23.907 6881.280 - 6911.069: 0.1352% ( 3) 00:11:23.908 6911.069 - 6940.858: 0.2028% ( 9) 00:11:23.908 6940.858 - 6970.647: 0.2629% ( 8) 00:11:23.908 6970.647 - 7000.436: 0.3155% ( 7) 00:11:23.908 7000.436 - 7030.225: 0.3831% ( 9) 00:11:23.908 7030.225 - 7060.015: 0.4808% ( 13) 00:11:23.908 7060.015 - 7089.804: 0.5784% ( 13) 00:11:23.908 7089.804 - 7119.593: 0.7061% ( 17) 00:11:23.908 7119.593 - 7149.382: 0.8188% ( 15) 00:11:23.908 7149.382 - 7179.171: 0.9465% ( 17) 00:11:23.908 7179.171 - 7208.960: 1.0742% ( 17) 00:11:23.908 7208.960 - 7238.749: 1.1794% ( 14) 00:11:23.908 7238.749 - 7268.538: 1.3146% ( 18) 
00:11:23.908 7268.538 - 7298.327: 1.3972% ( 11) 00:11:23.908 7298.327 - 7328.116: 1.5249% ( 17) 00:11:23.908 7328.116 - 7357.905: 1.6677% ( 19) 00:11:23.908 7357.905 - 7387.695: 1.8104% ( 19) 00:11:23.908 7387.695 - 7417.484: 1.9456% ( 18) 00:11:23.908 7417.484 - 7447.273: 2.0959% ( 20) 00:11:23.908 7447.273 - 7477.062: 2.2010% ( 14) 00:11:23.908 7477.062 - 7506.851: 2.3663% ( 22) 00:11:23.908 7506.851 - 7536.640: 2.4790% ( 15) 00:11:23.908 7536.640 - 7566.429: 2.6668% ( 25) 00:11:23.908 7566.429 - 7596.218: 2.7794% ( 15) 00:11:23.908 7596.218 - 7626.007: 2.9297% ( 20) 00:11:23.908 7626.007 - 7685.585: 3.2001% ( 36) 00:11:23.908 7685.585 - 7745.164: 3.5907% ( 52) 00:11:23.908 7745.164 - 7804.742: 4.1016% ( 68) 00:11:23.908 7804.742 - 7864.320: 4.7326% ( 84) 00:11:23.908 7864.320 - 7923.898: 5.5514% ( 109) 00:11:23.908 7923.898 - 7983.476: 6.4678% ( 122) 00:11:23.908 7983.476 - 8043.055: 7.4519% ( 131) 00:11:23.908 8043.055 - 8102.633: 8.6313% ( 157) 00:11:23.908 8102.633 - 8162.211: 9.8107% ( 157) 00:11:23.908 8162.211 - 8221.789: 11.2755% ( 195) 00:11:23.908 8221.789 - 8281.367: 12.9056% ( 217) 00:11:23.908 8281.367 - 8340.945: 14.7160% ( 241) 00:11:23.908 8340.945 - 8400.524: 16.4739% ( 234) 00:11:23.908 8400.524 - 8460.102: 18.2918% ( 242) 00:11:23.908 8460.102 - 8519.680: 20.2524% ( 261) 00:11:23.908 8519.680 - 8579.258: 22.1980% ( 259) 00:11:23.908 8579.258 - 8638.836: 24.1436% ( 259) 00:11:23.908 8638.836 - 8698.415: 26.0517% ( 254) 00:11:23.908 8698.415 - 8757.993: 28.0123% ( 261) 00:11:23.908 8757.993 - 8817.571: 29.9579% ( 259) 00:11:23.908 8817.571 - 8877.149: 31.8585% ( 253) 00:11:23.908 8877.149 - 8936.727: 33.9168% ( 274) 00:11:23.908 8936.727 - 8996.305: 35.9075% ( 265) 00:11:23.908 8996.305 - 9055.884: 37.9432% ( 271) 00:11:23.908 9055.884 - 9115.462: 40.0090% ( 275) 00:11:23.908 9115.462 - 9175.040: 42.1199% ( 281) 00:11:23.908 9175.040 - 9234.618: 44.1632% ( 272) 00:11:23.908 9234.618 - 9294.196: 46.2891% ( 283) 00:11:23.908 9294.196 - 9353.775: 48.4901% ( 293) 00:11:23.908 9353.775 - 9413.353: 50.6310% ( 285) 00:11:23.908 9413.353 - 9472.931: 52.7870% ( 287) 00:11:23.908 9472.931 - 9532.509: 54.8828% ( 279) 00:11:23.908 9532.509 - 9592.087: 57.0312% ( 286) 00:11:23.908 9592.087 - 9651.665: 59.1572% ( 283) 00:11:23.908 9651.665 - 9711.244: 61.2230% ( 275) 00:11:23.908 9711.244 - 9770.822: 63.3413% ( 282) 00:11:23.908 9770.822 - 9830.400: 65.4147% ( 276) 00:11:23.908 9830.400 - 9889.978: 67.4053% ( 265) 00:11:23.908 9889.978 - 9949.556: 69.4486% ( 272) 00:11:23.908 9949.556 - 10009.135: 71.4468% ( 266) 00:11:23.908 10009.135 - 10068.713: 73.4901% ( 272) 00:11:23.908 10068.713 - 10128.291: 75.4357% ( 259) 00:11:23.908 10128.291 - 10187.869: 77.2761% ( 245) 00:11:23.908 10187.869 - 10247.447: 79.2142% ( 258) 00:11:23.908 10247.447 - 10307.025: 81.0171% ( 240) 00:11:23.908 10307.025 - 10366.604: 82.6397% ( 216) 00:11:23.908 10366.604 - 10426.182: 83.9919% ( 180) 00:11:23.908 10426.182 - 10485.760: 85.1863% ( 159) 00:11:23.908 10485.760 - 10545.338: 86.1103% ( 123) 00:11:23.908 10545.338 - 10604.916: 87.0868% ( 130) 00:11:23.908 10604.916 - 10664.495: 87.8831% ( 106) 00:11:23.908 10664.495 - 10724.073: 88.7019% ( 109) 00:11:23.908 10724.073 - 10783.651: 89.4081% ( 94) 00:11:23.908 10783.651 - 10843.229: 90.0616% ( 87) 00:11:23.908 10843.229 - 10902.807: 90.7151% ( 87) 00:11:23.908 10902.807 - 10962.385: 91.3011% ( 78) 00:11:23.908 10962.385 - 11021.964: 91.9396% ( 85) 00:11:23.908 11021.964 - 11081.542: 92.4654% ( 70) 00:11:23.908 11081.542 - 11141.120: 92.9688% ( 67) 
00:11:23.908 11141.120 - 11200.698: 93.4420% ( 63) 00:11:23.908 11200.698 - 11260.276: 93.8702% ( 57) 00:11:23.908 11260.276 - 11319.855: 94.2834% ( 55) 00:11:23.908 11319.855 - 11379.433: 94.7266% ( 59) 00:11:23.908 11379.433 - 11439.011: 95.0496% ( 43) 00:11:23.908 11439.011 - 11498.589: 95.4026% ( 47) 00:11:23.908 11498.589 - 11558.167: 95.7106% ( 41) 00:11:23.908 11558.167 - 11617.745: 96.0412% ( 44) 00:11:23.908 11617.745 - 11677.324: 96.3492% ( 41) 00:11:23.908 11677.324 - 11736.902: 96.5820% ( 31) 00:11:23.908 11736.902 - 11796.480: 96.7773% ( 26) 00:11:23.908 11796.480 - 11856.058: 96.9952% ( 29) 00:11:23.908 11856.058 - 11915.636: 97.2055% ( 28) 00:11:23.908 11915.636 - 11975.215: 97.3858% ( 24) 00:11:23.908 11975.215 - 12034.793: 97.5661% ( 24) 00:11:23.908 12034.793 - 12094.371: 97.7314% ( 22) 00:11:23.908 12094.371 - 12153.949: 97.9117% ( 24) 00:11:23.908 12153.949 - 12213.527: 98.0619% ( 20) 00:11:23.908 12213.527 - 12273.105: 98.1746% ( 15) 00:11:23.908 12273.105 - 12332.684: 98.2873% ( 15) 00:11:23.908 12332.684 - 12392.262: 98.3398% ( 7) 00:11:23.908 12392.262 - 12451.840: 98.4150% ( 10) 00:11:23.908 12451.840 - 12511.418: 98.4901% ( 10) 00:11:23.908 12511.418 - 12570.996: 98.5427% ( 7) 00:11:23.908 12570.996 - 12630.575: 98.6178% ( 10) 00:11:23.908 12630.575 - 12690.153: 98.6854% ( 9) 00:11:23.908 12690.153 - 12749.731: 98.7831% ( 13) 00:11:23.908 12749.731 - 12809.309: 98.8206% ( 5) 00:11:23.908 12809.309 - 12868.887: 98.8657% ( 6) 00:11:23.908 12868.887 - 12928.465: 98.9183% ( 7) 00:11:23.908 12928.465 - 12988.044: 98.9633% ( 6) 00:11:23.908 12988.044 - 13047.622: 99.0009% ( 5) 00:11:23.908 13047.622 - 13107.200: 99.0159% ( 2) 00:11:23.908 13107.200 - 13166.778: 99.0234% ( 1) 00:11:23.908 13166.778 - 13226.356: 99.0385% ( 2) 00:11:23.908 29669.935 - 29789.091: 99.0610% ( 3) 00:11:23.908 29789.091 - 29908.247: 99.0760% ( 2) 00:11:23.908 29908.247 - 30027.404: 99.0986% ( 3) 00:11:23.908 30027.404 - 30146.560: 99.1211% ( 3) 00:11:23.908 30146.560 - 30265.716: 99.1436% ( 3) 00:11:23.908 30265.716 - 30384.873: 99.1737% ( 4) 00:11:23.908 30384.873 - 30504.029: 99.1962% ( 3) 00:11:23.908 30504.029 - 30742.342: 99.2413% ( 6) 00:11:23.908 30742.342 - 30980.655: 99.2864% ( 6) 00:11:23.908 30980.655 - 31218.967: 99.3389% ( 7) 00:11:23.908 31218.967 - 31457.280: 99.3915% ( 7) 00:11:23.908 31457.280 - 31695.593: 99.4366% ( 6) 00:11:23.908 31695.593 - 31933.905: 99.4817% ( 6) 00:11:23.908 31933.905 - 32172.218: 99.5267% ( 6) 00:11:23.908 32172.218 - 32410.531: 99.5868% ( 8) 00:11:23.908 32410.531 - 32648.844: 99.6319% ( 6) 00:11:23.908 32648.844 - 32887.156: 99.6845% ( 7) 00:11:23.908 32887.156 - 33125.469: 99.7446% ( 8) 00:11:23.908 33125.469 - 33363.782: 99.7897% ( 6) 00:11:23.908 33363.782 - 33602.095: 99.8347% ( 6) 00:11:23.908 33602.095 - 33840.407: 99.8948% ( 8) 00:11:23.908 33840.407 - 34078.720: 99.9399% ( 6) 00:11:23.908 34078.720 - 34317.033: 99.9925% ( 7) 00:11:23.908 34317.033 - 34555.345: 100.0000% ( 1) 00:11:23.908 00:11:23.908 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:11:23.908 ============================================================================== 00:11:23.908 Range in us Cumulative IO count 00:11:23.908 6940.858 - 6970.647: 0.0075% ( 1) 00:11:23.908 6970.647 - 7000.436: 0.0300% ( 3) 00:11:23.908 7000.436 - 7030.225: 0.0601% ( 4) 00:11:23.908 7030.225 - 7060.015: 0.1052% ( 6) 00:11:23.908 7060.015 - 7089.804: 0.1427% ( 5) 00:11:23.908 7089.804 - 7119.593: 0.1803% ( 5) 00:11:23.908 7119.593 - 7149.382: 0.2329% ( 7) 00:11:23.908 7149.382 - 
7179.171: 0.2855% ( 7) 00:11:23.908 7179.171 - 7208.960: 0.3606% ( 10) 00:11:23.908 7208.960 - 7238.749: 0.4507% ( 12) 00:11:23.908 7238.749 - 7268.538: 0.5334% ( 11) 00:11:23.908 7268.538 - 7298.327: 0.6535% ( 16) 00:11:23.908 7298.327 - 7328.116: 0.7963% ( 19) 00:11:23.908 7328.116 - 7357.905: 0.9240% ( 17) 00:11:23.908 7357.905 - 7387.695: 1.0667% ( 19) 00:11:23.908 7387.695 - 7417.484: 1.2019% ( 18) 00:11:23.908 7417.484 - 7447.273: 1.3447% ( 19) 00:11:23.908 7447.273 - 7477.062: 1.4799% ( 18) 00:11:23.908 7477.062 - 7506.851: 1.6451% ( 22) 00:11:23.908 7506.851 - 7536.640: 1.7954% ( 20) 00:11:23.908 7536.640 - 7566.429: 1.9381% ( 19) 00:11:23.908 7566.429 - 7596.218: 2.0959% ( 21) 00:11:23.908 7596.218 - 7626.007: 2.2311% ( 18) 00:11:23.908 7626.007 - 7685.585: 2.5916% ( 48) 00:11:23.908 7685.585 - 7745.164: 2.9447% ( 47) 00:11:23.908 7745.164 - 7804.742: 3.3128% ( 49) 00:11:23.908 7804.742 - 7864.320: 3.7335% ( 56) 00:11:23.908 7864.320 - 7923.898: 4.1466% ( 55) 00:11:23.908 7923.898 - 7983.476: 4.7100% ( 75) 00:11:23.908 7983.476 - 8043.055: 5.3786% ( 89) 00:11:23.908 8043.055 - 8102.633: 6.2575% ( 117) 00:11:23.908 8102.633 - 8162.211: 7.1965% ( 125) 00:11:23.908 8162.211 - 8221.789: 8.3909% ( 159) 00:11:23.908 8221.789 - 8281.367: 9.6980% ( 174) 00:11:23.908 8281.367 - 8340.945: 11.1854% ( 198) 00:11:23.908 8340.945 - 8400.524: 12.8456% ( 221) 00:11:23.908 8400.524 - 8460.102: 14.7085% ( 248) 00:11:23.908 8460.102 - 8519.680: 16.6692% ( 261) 00:11:23.908 8519.680 - 8579.258: 18.6824% ( 268) 00:11:23.908 8579.258 - 8638.836: 20.7933% ( 281) 00:11:23.908 8638.836 - 8698.415: 22.9041% ( 281) 00:11:23.908 8698.415 - 8757.993: 25.1502% ( 299) 00:11:23.908 8757.993 - 8817.571: 27.4339% ( 304) 00:11:23.908 8817.571 - 8877.149: 29.7701% ( 311) 00:11:23.908 8877.149 - 8936.727: 32.1139% ( 312) 00:11:23.908 8936.727 - 8996.305: 34.5177% ( 320) 00:11:23.908 8996.305 - 9055.884: 36.8840% ( 315) 00:11:23.908 9055.884 - 9115.462: 39.2728% ( 318) 00:11:23.908 9115.462 - 9175.040: 41.6316% ( 314) 00:11:23.908 9175.040 - 9234.618: 44.1256% ( 332) 00:11:23.908 9234.618 - 9294.196: 46.5219% ( 319) 00:11:23.908 9294.196 - 9353.775: 48.9108% ( 318) 00:11:23.908 9353.775 - 9413.353: 51.2770% ( 315) 00:11:23.908 9413.353 - 9472.931: 53.6208% ( 312) 00:11:23.908 9472.931 - 9532.509: 55.9721% ( 313) 00:11:23.908 9532.509 - 9592.087: 58.3158% ( 312) 00:11:23.908 9592.087 - 9651.665: 60.5694% ( 300) 00:11:23.908 9651.665 - 9711.244: 62.8906% ( 309) 00:11:23.908 9711.244 - 9770.822: 65.1818% ( 305) 00:11:23.908 9770.822 - 9830.400: 67.5180% ( 311) 00:11:23.908 9830.400 - 9889.978: 69.9144% ( 319) 00:11:23.908 9889.978 - 9949.556: 72.2581% ( 312) 00:11:23.908 9949.556 - 10009.135: 74.5042% ( 299) 00:11:23.908 10009.135 - 10068.713: 76.6977% ( 292) 00:11:23.908 10068.713 - 10128.291: 78.6358% ( 258) 00:11:23.908 10128.291 - 10187.869: 80.4162% ( 237) 00:11:23.908 10187.869 - 10247.447: 81.8359% ( 189) 00:11:23.908 10247.447 - 10307.025: 83.1355% ( 173) 00:11:23.908 10307.025 - 10366.604: 84.2698% ( 151) 00:11:23.908 10366.604 - 10426.182: 85.3365% ( 142) 00:11:23.908 10426.182 - 10485.760: 86.2831% ( 126) 00:11:23.908 10485.760 - 10545.338: 87.1620% ( 117) 00:11:23.908 10545.338 - 10604.916: 87.8906% ( 97) 00:11:23.908 10604.916 - 10664.495: 88.6193% ( 97) 00:11:23.908 10664.495 - 10724.073: 89.2653% ( 86) 00:11:23.908 10724.073 - 10783.651: 89.9639% ( 93) 00:11:23.908 10783.651 - 10843.229: 90.6475% ( 91) 00:11:23.908 10843.229 - 10902.807: 91.2710% ( 83) 00:11:23.908 10902.807 - 10962.385: 91.8945% ( 83) 
00:11:23.908 10962.385 - 11021.964: 92.4730% ( 77) 00:11:23.908 11021.964 - 11081.542: 93.0589% ( 78) 00:11:23.908 11081.542 - 11141.120: 93.6148% ( 74) 00:11:23.908 11141.120 - 11200.698: 94.1331% ( 69) 00:11:23.908 11200.698 - 11260.276: 94.5989% ( 62) 00:11:23.908 11260.276 - 11319.855: 95.0195% ( 56) 00:11:23.908 11319.855 - 11379.433: 95.4552% ( 58) 00:11:23.908 11379.433 - 11439.011: 95.8233% ( 49) 00:11:23.908 11439.011 - 11498.589: 96.1764% ( 47) 00:11:23.908 11498.589 - 11558.167: 96.5294% ( 47) 00:11:23.908 11558.167 - 11617.745: 96.8074% ( 37) 00:11:23.908 11617.745 - 11677.324: 97.0252% ( 29) 00:11:23.908 11677.324 - 11736.902: 97.2356% ( 28) 00:11:23.908 11736.902 - 11796.480: 97.4384% ( 27) 00:11:23.908 11796.480 - 11856.058: 97.6337% ( 26) 00:11:23.908 11856.058 - 11915.636: 97.8065% ( 23) 00:11:23.908 11915.636 - 11975.215: 97.9492% ( 19) 00:11:23.908 11975.215 - 12034.793: 98.0919% ( 19) 00:11:23.908 12034.793 - 12094.371: 98.2272% ( 18) 00:11:23.908 12094.371 - 12153.949: 98.3474% ( 16) 00:11:23.908 12153.949 - 12213.527: 98.4225% ( 10) 00:11:23.908 12213.527 - 12273.105: 98.5201% ( 13) 00:11:23.908 12273.105 - 12332.684: 98.6028% ( 11) 00:11:23.908 12332.684 - 12392.262: 98.6704% ( 9) 00:11:23.908 12392.262 - 12451.840: 98.7380% ( 9) 00:11:23.908 12451.840 - 12511.418: 98.8056% ( 9) 00:11:23.908 12511.418 - 12570.996: 98.8582% ( 7) 00:11:23.908 12570.996 - 12630.575: 98.8882% ( 4) 00:11:23.908 12630.575 - 12690.153: 98.9183% ( 4) 00:11:23.908 12690.153 - 12749.731: 98.9408% ( 3) 00:11:23.908 12749.731 - 12809.309: 98.9709% ( 4) 00:11:23.908 12809.309 - 12868.887: 99.0009% ( 4) 00:11:23.908 12868.887 - 12928.465: 99.0159% ( 2) 00:11:23.908 12928.465 - 12988.044: 99.0385% ( 3) 00:11:23.908 28240.058 - 28359.215: 99.0610% ( 3) 00:11:23.908 28359.215 - 28478.371: 99.0835% ( 3) 00:11:23.908 28478.371 - 28597.527: 99.0986% ( 2) 00:11:23.908 28597.527 - 28716.684: 99.1211% ( 3) 00:11:23.908 28716.684 - 28835.840: 99.1436% ( 3) 00:11:23.908 28835.840 - 28954.996: 99.1662% ( 3) 00:11:23.908 28954.996 - 29074.153: 99.1887% ( 3) 00:11:23.908 29074.153 - 29193.309: 99.2112% ( 3) 00:11:23.908 29193.309 - 29312.465: 99.2338% ( 3) 00:11:23.908 29312.465 - 29431.622: 99.2563% ( 3) 00:11:23.908 29431.622 - 29550.778: 99.2788% ( 3) 00:11:23.908 29550.778 - 29669.935: 99.3014% ( 3) 00:11:23.908 29669.935 - 29789.091: 99.3239% ( 3) 00:11:23.908 29789.091 - 29908.247: 99.3465% ( 3) 00:11:23.908 29908.247 - 30027.404: 99.3690% ( 3) 00:11:23.908 30027.404 - 30146.560: 99.3915% ( 3) 00:11:23.908 30146.560 - 30265.716: 99.4141% ( 3) 00:11:23.908 30265.716 - 30384.873: 99.4366% ( 3) 00:11:23.908 30384.873 - 30504.029: 99.4591% ( 3) 00:11:23.908 30504.029 - 30742.342: 99.5042% ( 6) 00:11:23.908 30742.342 - 30980.655: 99.5418% ( 5) 00:11:23.908 30980.655 - 31218.967: 99.5793% ( 5) 00:11:23.908 31218.967 - 31457.280: 99.6244% ( 6) 00:11:23.908 31457.280 - 31695.593: 99.6770% ( 7) 00:11:23.909 31695.593 - 31933.905: 99.7221% ( 6) 00:11:23.909 31933.905 - 32172.218: 99.7746% ( 7) 00:11:23.909 32172.218 - 32410.531: 99.8197% ( 6) 00:11:23.909 32410.531 - 32648.844: 99.8648% ( 6) 00:11:23.909 32648.844 - 32887.156: 99.9099% ( 6) 00:11:23.909 32887.156 - 33125.469: 99.9549% ( 6) 00:11:23.909 33125.469 - 33363.782: 100.0000% ( 6) 00:11:23.909 00:11:23.909 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:11:23.909 ============================================================================== 00:11:23.909 Range in us Cumulative IO count 00:11:23.909 6911.069 - 6940.858: 0.0075% ( 1) 
00:11:23.909 6940.858 - 6970.647: 0.0225% ( 2) 00:11:23.909 6970.647 - 7000.436: 0.0376% ( 2) 00:11:23.909 7000.436 - 7030.225: 0.0451% ( 1) 00:11:23.909 7030.225 - 7060.015: 0.0676% ( 3) 00:11:23.909 7060.015 - 7089.804: 0.1277% ( 8) 00:11:23.909 7089.804 - 7119.593: 0.1578% ( 4) 00:11:23.909 7119.593 - 7149.382: 0.2329% ( 10) 00:11:23.909 7149.382 - 7179.171: 0.2930% ( 8) 00:11:23.909 7179.171 - 7208.960: 0.3756% ( 11) 00:11:23.909 7208.960 - 7238.749: 0.4657% ( 12) 00:11:23.909 7238.749 - 7268.538: 0.5859% ( 16) 00:11:23.909 7268.538 - 7298.327: 0.7362% ( 20) 00:11:23.909 7298.327 - 7328.116: 0.8714% ( 18) 00:11:23.909 7328.116 - 7357.905: 1.0442% ( 23) 00:11:23.909 7357.905 - 7387.695: 1.2019% ( 21) 00:11:23.909 7387.695 - 7417.484: 1.3597% ( 21) 00:11:23.909 7417.484 - 7447.273: 1.5249% ( 22) 00:11:23.909 7447.273 - 7477.062: 1.6602% ( 18) 00:11:23.909 7477.062 - 7506.851: 1.8029% ( 19) 00:11:23.909 7506.851 - 7536.640: 1.9606% ( 21) 00:11:23.909 7536.640 - 7566.429: 2.1184% ( 21) 00:11:23.909 7566.429 - 7596.218: 2.2837% ( 22) 00:11:23.909 7596.218 - 7626.007: 2.4189% ( 18) 00:11:23.909 7626.007 - 7685.585: 2.7419% ( 43) 00:11:23.909 7685.585 - 7745.164: 3.0950% ( 47) 00:11:23.909 7745.164 - 7804.742: 3.4029% ( 41) 00:11:23.909 7804.742 - 7864.320: 3.7861% ( 51) 00:11:23.909 7864.320 - 7923.898: 4.2743% ( 65) 00:11:23.909 7923.898 - 7983.476: 4.9129% ( 85) 00:11:23.909 7983.476 - 8043.055: 5.6941% ( 104) 00:11:23.909 8043.055 - 8102.633: 6.5805% ( 118) 00:11:23.909 8102.633 - 8162.211: 7.6698% ( 145) 00:11:23.909 8162.211 - 8221.789: 8.9017% ( 164) 00:11:23.909 8221.789 - 8281.367: 10.2990% ( 186) 00:11:23.909 8281.367 - 8340.945: 11.8615% ( 208) 00:11:23.909 8340.945 - 8400.524: 13.6043% ( 232) 00:11:23.909 8400.524 - 8460.102: 15.4072% ( 240) 00:11:23.909 8460.102 - 8519.680: 17.3828% ( 263) 00:11:23.909 8519.680 - 8579.258: 19.3284% ( 259) 00:11:23.909 8579.258 - 8638.836: 21.3492% ( 269) 00:11:23.909 8638.836 - 8698.415: 23.3248% ( 263) 00:11:23.909 8698.415 - 8757.993: 25.4056% ( 277) 00:11:23.909 8757.993 - 8817.571: 27.5541% ( 286) 00:11:23.909 8817.571 - 8877.149: 29.7251% ( 289) 00:11:23.909 8877.149 - 8936.727: 31.9787% ( 300) 00:11:23.909 8936.727 - 8996.305: 34.2097% ( 297) 00:11:23.909 8996.305 - 9055.884: 36.4859% ( 303) 00:11:23.909 9055.884 - 9115.462: 38.8221% ( 311) 00:11:23.909 9115.462 - 9175.040: 41.1283% ( 307) 00:11:23.909 9175.040 - 9234.618: 43.5397% ( 321) 00:11:23.909 9234.618 - 9294.196: 45.9360% ( 319) 00:11:23.909 9294.196 - 9353.775: 48.3023% ( 315) 00:11:23.909 9353.775 - 9413.353: 50.6611% ( 314) 00:11:23.909 9413.353 - 9472.931: 53.0874% ( 323) 00:11:23.909 9472.931 - 9532.509: 55.4011% ( 308) 00:11:23.909 9532.509 - 9592.087: 57.6848% ( 304) 00:11:23.909 9592.087 - 9651.665: 59.9910% ( 307) 00:11:23.909 9651.665 - 9711.244: 62.3122% ( 309) 00:11:23.909 9711.244 - 9770.822: 64.6109% ( 306) 00:11:23.909 9770.822 - 9830.400: 67.0147% ( 320) 00:11:23.909 9830.400 - 9889.978: 69.2758% ( 301) 00:11:23.909 9889.978 - 9949.556: 71.5595% ( 304) 00:11:23.909 9949.556 - 10009.135: 73.7680% ( 294) 00:11:23.909 10009.135 - 10068.713: 75.9615% ( 292) 00:11:23.909 10068.713 - 10128.291: 77.9222% ( 261) 00:11:23.909 10128.291 - 10187.869: 79.6349% ( 228) 00:11:23.909 10187.869 - 10247.447: 81.2500% ( 215) 00:11:23.909 10247.447 - 10307.025: 82.7073% ( 194) 00:11:23.909 10307.025 - 10366.604: 83.9093% ( 160) 00:11:23.909 10366.604 - 10426.182: 84.9985% ( 145) 00:11:23.909 10426.182 - 10485.760: 86.0427% ( 139) 00:11:23.909 10485.760 - 10545.338: 86.9441% ( 120) 
00:11:23.909 10545.338 - 10604.916: 87.8080% ( 115) 00:11:23.909 10604.916 - 10664.495: 88.5892% ( 104) 00:11:23.909 10664.495 - 10724.073: 89.3555% ( 102) 00:11:23.909 10724.073 - 10783.651: 90.0391% ( 91) 00:11:23.909 10783.651 - 10843.229: 90.7227% ( 91) 00:11:23.909 10843.229 - 10902.807: 91.4138% ( 92) 00:11:23.909 10902.807 - 10962.385: 92.0823% ( 89) 00:11:23.909 10962.385 - 11021.964: 92.7133% ( 84) 00:11:23.909 11021.964 - 11081.542: 93.3368% ( 83) 00:11:23.909 11081.542 - 11141.120: 93.9078% ( 76) 00:11:23.909 11141.120 - 11200.698: 94.4336% ( 70) 00:11:23.909 11200.698 - 11260.276: 95.0045% ( 76) 00:11:23.909 11260.276 - 11319.855: 95.4177% ( 55) 00:11:23.909 11319.855 - 11379.433: 95.8083% ( 52) 00:11:23.909 11379.433 - 11439.011: 96.1163% ( 41) 00:11:23.909 11439.011 - 11498.589: 96.4168% ( 40) 00:11:23.909 11498.589 - 11558.167: 96.6797% ( 35) 00:11:23.909 11558.167 - 11617.745: 96.9727% ( 39) 00:11:23.909 11617.745 - 11677.324: 97.2281% ( 34) 00:11:23.909 11677.324 - 11736.902: 97.5060% ( 37) 00:11:23.909 11736.902 - 11796.480: 97.7239% ( 29) 00:11:23.909 11796.480 - 11856.058: 97.9342% ( 28) 00:11:23.909 11856.058 - 11915.636: 98.1220% ( 25) 00:11:23.909 11915.636 - 11975.215: 98.3173% ( 26) 00:11:23.909 11975.215 - 12034.793: 98.4901% ( 23) 00:11:23.909 12034.793 - 12094.371: 98.6178% ( 17) 00:11:23.909 12094.371 - 12153.949: 98.7004% ( 11) 00:11:23.909 12153.949 - 12213.527: 98.7755% ( 10) 00:11:23.909 12213.527 - 12273.105: 98.8431% ( 9) 00:11:23.909 12273.105 - 12332.684: 98.8957% ( 7) 00:11:23.909 12332.684 - 12392.262: 98.9333% ( 5) 00:11:23.909 12392.262 - 12451.840: 98.9558% ( 3) 00:11:23.909 12451.840 - 12511.418: 98.9784% ( 3) 00:11:23.909 12511.418 - 12570.996: 99.0084% ( 4) 00:11:23.909 12570.996 - 12630.575: 99.0309% ( 3) 00:11:23.909 12630.575 - 12690.153: 99.0385% ( 1) 00:11:23.909 27286.807 - 27405.964: 99.0610% ( 3) 00:11:23.909 27405.964 - 27525.120: 99.0835% ( 3) 00:11:23.909 27525.120 - 27644.276: 99.0986% ( 2) 00:11:23.909 27644.276 - 27763.433: 99.1211% ( 3) 00:11:23.909 27763.433 - 27882.589: 99.1436% ( 3) 00:11:23.909 27882.589 - 28001.745: 99.1662% ( 3) 00:11:23.909 28001.745 - 28120.902: 99.1962% ( 4) 00:11:23.909 28120.902 - 28240.058: 99.2188% ( 3) 00:11:23.909 28240.058 - 28359.215: 99.2413% ( 3) 00:11:23.909 28359.215 - 28478.371: 99.2638% ( 3) 00:11:23.909 28478.371 - 28597.527: 99.2864% ( 3) 00:11:23.909 28597.527 - 28716.684: 99.3089% ( 3) 00:11:23.909 28716.684 - 28835.840: 99.3389% ( 4) 00:11:23.909 28835.840 - 28954.996: 99.3540% ( 2) 00:11:23.909 28954.996 - 29074.153: 99.3765% ( 3) 00:11:23.909 29074.153 - 29193.309: 99.3990% ( 3) 00:11:23.909 29193.309 - 29312.465: 99.4291% ( 4) 00:11:23.909 29312.465 - 29431.622: 99.4516% ( 3) 00:11:23.909 29431.622 - 29550.778: 99.4742% ( 3) 00:11:23.909 29550.778 - 29669.935: 99.4967% ( 3) 00:11:23.909 29669.935 - 29789.091: 99.5192% ( 3) 00:11:23.909 29789.091 - 29908.247: 99.5418% ( 3) 00:11:23.909 29908.247 - 30027.404: 99.5643% ( 3) 00:11:23.909 30027.404 - 30146.560: 99.5793% ( 2) 00:11:23.909 30146.560 - 30265.716: 99.6094% ( 4) 00:11:23.909 30265.716 - 30384.873: 99.6319% ( 3) 00:11:23.909 30384.873 - 30504.029: 99.6544% ( 3) 00:11:23.909 30504.029 - 30742.342: 99.6995% ( 6) 00:11:23.909 30742.342 - 30980.655: 99.7446% ( 6) 00:11:23.909 30980.655 - 31218.967: 99.7972% ( 7) 00:11:23.909 31218.967 - 31457.280: 99.8422% ( 6) 00:11:23.909 31457.280 - 31695.593: 99.8948% ( 7) 00:11:23.909 31695.593 - 31933.905: 99.9399% ( 6) 00:11:23.909 31933.905 - 32172.218: 99.9850% ( 6) 00:11:23.909 32172.218 
- 32410.531: 100.0000% ( 2) 00:11:23.909 00:11:23.909 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:11:23.909 ============================================================================== 00:11:23.909 Range in us Cumulative IO count 00:11:23.909 7000.436 - 7030.225: 0.0150% ( 2) 00:11:23.909 7030.225 - 7060.015: 0.0300% ( 2) 00:11:23.909 7060.015 - 7089.804: 0.0601% ( 4) 00:11:23.909 7089.804 - 7119.593: 0.1052% ( 6) 00:11:23.909 7119.593 - 7149.382: 0.1653% ( 8) 00:11:23.909 7149.382 - 7179.171: 0.2329% ( 9) 00:11:23.909 7179.171 - 7208.960: 0.3080% ( 10) 00:11:23.909 7208.960 - 7238.749: 0.4132% ( 14) 00:11:23.909 7238.749 - 7268.538: 0.5559% ( 19) 00:11:23.909 7268.538 - 7298.327: 0.6986% ( 19) 00:11:23.909 7298.327 - 7328.116: 0.8714% ( 23) 00:11:23.909 7328.116 - 7357.905: 1.0517% ( 24) 00:11:23.909 7357.905 - 7387.695: 1.2094% ( 21) 00:11:23.909 7387.695 - 7417.484: 1.3897% ( 24) 00:11:23.909 7417.484 - 7447.273: 1.5475% ( 21) 00:11:23.909 7447.273 - 7477.062: 1.7127% ( 22) 00:11:23.909 7477.062 - 7506.851: 1.8930% ( 24) 00:11:23.909 7506.851 - 7536.640: 2.0658% ( 23) 00:11:23.909 7536.640 - 7566.429: 2.2160% ( 20) 00:11:23.909 7566.429 - 7596.218: 2.3963% ( 24) 00:11:23.909 7596.218 - 7626.007: 2.5691% ( 23) 00:11:23.909 7626.007 - 7685.585: 2.9072% ( 45) 00:11:23.909 7685.585 - 7745.164: 3.2677% ( 48) 00:11:23.909 7745.164 - 7804.742: 3.6133% ( 46) 00:11:23.909 7804.742 - 7864.320: 4.0039% ( 52) 00:11:23.909 7864.320 - 7923.898: 4.5147% ( 68) 00:11:23.909 7923.898 - 7983.476: 5.0856% ( 76) 00:11:23.909 7983.476 - 8043.055: 5.7392% ( 87) 00:11:23.909 8043.055 - 8102.633: 6.6556% ( 122) 00:11:23.909 8102.633 - 8162.211: 7.6923% ( 138) 00:11:23.909 8162.211 - 8221.789: 8.8567% ( 155) 00:11:23.909 8221.789 - 8281.367: 10.1713% ( 175) 00:11:23.909 8281.367 - 8340.945: 11.6887% ( 202) 00:11:23.909 8340.945 - 8400.524: 13.3789% ( 225) 00:11:23.909 8400.524 - 8460.102: 15.1442% ( 235) 00:11:23.909 8460.102 - 8519.680: 17.0448% ( 253) 00:11:23.909 8519.680 - 8579.258: 19.0505% ( 267) 00:11:23.909 8579.258 - 8638.836: 21.0938% ( 272) 00:11:23.909 8638.836 - 8698.415: 23.0995% ( 267) 00:11:23.909 8698.415 - 8757.993: 25.2855% ( 291) 00:11:23.909 8757.993 - 8817.571: 27.4715% ( 291) 00:11:23.909 8817.571 - 8877.149: 29.7025% ( 297) 00:11:23.909 8877.149 - 8936.727: 31.9561% ( 300) 00:11:23.909 8936.727 - 8996.305: 34.2698% ( 308) 00:11:23.909 8996.305 - 9055.884: 36.6662% ( 319) 00:11:23.909 9055.884 - 9115.462: 39.1076% ( 325) 00:11:23.909 9115.462 - 9175.040: 41.5264% ( 322) 00:11:23.909 9175.040 - 9234.618: 43.9829% ( 327) 00:11:23.909 9234.618 - 9294.196: 46.3416% ( 314) 00:11:23.909 9294.196 - 9353.775: 48.7154% ( 316) 00:11:23.909 9353.775 - 9413.353: 51.0817% ( 315) 00:11:23.909 9413.353 - 9472.931: 53.3128% ( 297) 00:11:23.909 9472.931 - 9532.509: 55.6716% ( 314) 00:11:23.909 9532.509 - 9592.087: 57.8425% ( 289) 00:11:23.909 9592.087 - 9651.665: 60.1187% ( 303) 00:11:23.909 9651.665 - 9711.244: 62.4023% ( 304) 00:11:23.909 9711.244 - 9770.822: 64.7160% ( 308) 00:11:23.909 9770.822 - 9830.400: 67.0147% ( 306) 00:11:23.909 9830.400 - 9889.978: 69.3434% ( 310) 00:11:23.909 9889.978 - 9949.556: 71.6647% ( 309) 00:11:23.909 9949.556 - 10009.135: 73.8582% ( 292) 00:11:23.909 10009.135 - 10068.713: 75.9991% ( 285) 00:11:23.909 10068.713 - 10128.291: 78.0574% ( 274) 00:11:23.909 10128.291 - 10187.869: 79.8828% ( 243) 00:11:23.909 10187.869 - 10247.447: 81.4603% ( 210) 00:11:23.909 10247.447 - 10307.025: 82.8425% ( 184) 00:11:23.909 10307.025 - 10366.604: 84.0219% ( 157) 
00:11:23.909 10366.604 - 10426.182: 85.1262% ( 147) 00:11:23.909 10426.182 - 10485.760: 86.1629% ( 138) 00:11:23.909 10485.760 - 10545.338: 86.9742% ( 108) 00:11:23.909 10545.338 - 10604.916: 87.7704% ( 106) 00:11:23.909 10604.916 - 10664.495: 88.5742% ( 107) 00:11:23.909 10664.495 - 10724.073: 89.3254% ( 100) 00:11:23.909 10724.073 - 10783.651: 90.0691% ( 99) 00:11:23.909 10783.651 - 10843.229: 90.7302% ( 88) 00:11:23.909 10843.229 - 10902.807: 91.4138% ( 91) 00:11:23.909 10902.807 - 10962.385: 92.0373% ( 83) 00:11:23.909 10962.385 - 11021.964: 92.5856% ( 73) 00:11:23.909 11021.964 - 11081.542: 93.0965% ( 68) 00:11:23.909 11081.542 - 11141.120: 93.5697% ( 63) 00:11:23.909 11141.120 - 11200.698: 93.9904% ( 56) 00:11:23.909 11200.698 - 11260.276: 94.3960% ( 54) 00:11:23.909 11260.276 - 11319.855: 94.7716% ( 50) 00:11:23.909 11319.855 - 11379.433: 95.1848% ( 55) 00:11:23.909 11379.433 - 11439.011: 95.5604% ( 50) 00:11:23.909 11439.011 - 11498.589: 95.8984% ( 45) 00:11:23.909 11498.589 - 11558.167: 96.1914% ( 39) 00:11:23.909 11558.167 - 11617.745: 96.4994% ( 41) 00:11:23.909 11617.745 - 11677.324: 96.7924% ( 39) 00:11:23.909 11677.324 - 11736.902: 97.0628% ( 36) 00:11:23.909 11736.902 - 11796.480: 97.3182% ( 34) 00:11:23.909 11796.480 - 11856.058: 97.5586% ( 32) 00:11:23.909 11856.058 - 11915.636: 97.7539% ( 26) 00:11:23.909 11915.636 - 11975.215: 97.9192% ( 22) 00:11:23.909 11975.215 - 12034.793: 98.0469% ( 17) 00:11:23.909 12034.793 - 12094.371: 98.1520% ( 14) 00:11:23.909 12094.371 - 12153.949: 98.2647% ( 15) 00:11:23.909 12153.949 - 12213.527: 98.3849% ( 16) 00:11:23.909 12213.527 - 12273.105: 98.5051% ( 16) 00:11:23.909 12273.105 - 12332.684: 98.5953% ( 12) 00:11:23.909 12332.684 - 12392.262: 98.6854% ( 12) 00:11:23.909 12392.262 - 12451.840: 98.7530% ( 9) 00:11:23.909 12451.840 - 12511.418: 98.8206% ( 9) 00:11:23.909 12511.418 - 12570.996: 98.8582% ( 5) 00:11:23.909 12570.996 - 12630.575: 98.8957% ( 5) 00:11:23.909 12630.575 - 12690.153: 98.9108% ( 2) 00:11:23.909 12690.153 - 12749.731: 98.9333% ( 3) 00:11:23.909 12749.731 - 12809.309: 98.9558% ( 3) 00:11:23.909 12809.309 - 12868.887: 98.9859% ( 4) 00:11:23.909 12868.887 - 12928.465: 99.0084% ( 3) 00:11:23.910 12928.465 - 12988.044: 99.0159% ( 1) 00:11:23.910 12988.044 - 13047.622: 99.0385% ( 3) 00:11:23.910 25499.462 - 25618.618: 99.0460% ( 1) 00:11:23.910 25618.618 - 25737.775: 99.0685% ( 3) 00:11:23.910 25737.775 - 25856.931: 99.0910% ( 3) 00:11:23.910 25856.931 - 25976.087: 99.1136% ( 3) 00:11:23.910 25976.087 - 26095.244: 99.1361% ( 3) 00:11:23.910 26095.244 - 26214.400: 99.1587% ( 3) 00:11:23.910 26214.400 - 26333.556: 99.1812% ( 3) 00:11:23.910 26333.556 - 26452.713: 99.2037% ( 3) 00:11:23.910 26452.713 - 26571.869: 99.2338% ( 4) 00:11:23.910 26571.869 - 26691.025: 99.2563% ( 3) 00:11:23.910 26691.025 - 26810.182: 99.2788% ( 3) 00:11:23.910 26810.182 - 26929.338: 99.3014% ( 3) 00:11:23.910 26929.338 - 27048.495: 99.3239% ( 3) 00:11:23.910 27048.495 - 27167.651: 99.3465% ( 3) 00:11:23.910 27167.651 - 27286.807: 99.3690% ( 3) 00:11:23.910 27286.807 - 27405.964: 99.3990% ( 4) 00:11:23.910 27405.964 - 27525.120: 99.4216% ( 3) 00:11:23.910 27525.120 - 27644.276: 99.4441% ( 3) 00:11:23.910 27644.276 - 27763.433: 99.4666% ( 3) 00:11:23.910 27763.433 - 27882.589: 99.4892% ( 3) 00:11:23.910 27882.589 - 28001.745: 99.5117% ( 3) 00:11:23.910 28001.745 - 28120.902: 99.5343% ( 3) 00:11:23.910 28120.902 - 28240.058: 99.5643% ( 4) 00:11:23.910 28240.058 - 28359.215: 99.5868% ( 3) 00:11:23.910 28359.215 - 28478.371: 99.6094% ( 3) 00:11:23.910 
28478.371 - 28597.527: 99.6244% ( 2) 00:11:23.910 28597.527 - 28716.684: 99.6544% ( 4) 00:11:23.910 28716.684 - 28835.840: 99.6770% ( 3) 00:11:23.910 28835.840 - 28954.996: 99.6995% ( 3) 00:11:23.910 28954.996 - 29074.153: 99.7221% ( 3) 00:11:23.910 29074.153 - 29193.309: 99.7521% ( 4) 00:11:23.910 29193.309 - 29312.465: 99.7671% ( 2) 00:11:23.910 29312.465 - 29431.622: 99.7897% ( 3) 00:11:23.910 29431.622 - 29550.778: 99.8122% ( 3) 00:11:23.910 29550.778 - 29669.935: 99.8422% ( 4) 00:11:23.910 29669.935 - 29789.091: 99.8648% ( 3) 00:11:23.910 29789.091 - 29908.247: 99.8873% ( 3) 00:11:23.910 29908.247 - 30027.404: 99.9099% ( 3) 00:11:23.910 30027.404 - 30146.560: 99.9324% ( 3) 00:11:23.910 30146.560 - 30265.716: 99.9549% ( 3) 00:11:23.910 30265.716 - 30384.873: 99.9775% ( 3) 00:11:23.910 30384.873 - 30504.029: 100.0000% ( 3) 00:11:23.910 00:11:23.910 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:11:23.910 ============================================================================== 00:11:23.910 Range in us Cumulative IO count 00:11:23.910 7000.436 - 7030.225: 0.0075% ( 1) 00:11:23.910 7030.225 - 7060.015: 0.0225% ( 2) 00:11:23.910 7060.015 - 7089.804: 0.0526% ( 4) 00:11:23.910 7089.804 - 7119.593: 0.1052% ( 7) 00:11:23.910 7119.593 - 7149.382: 0.1728% ( 9) 00:11:23.910 7149.382 - 7179.171: 0.2178% ( 6) 00:11:23.910 7179.171 - 7208.960: 0.3080% ( 12) 00:11:23.910 7208.960 - 7238.749: 0.4282% ( 16) 00:11:23.910 7238.749 - 7268.538: 0.5709% ( 19) 00:11:23.910 7268.538 - 7298.327: 0.6986% ( 17) 00:11:23.910 7298.327 - 7328.116: 0.8564% ( 21) 00:11:23.910 7328.116 - 7357.905: 0.9991% ( 19) 00:11:23.910 7357.905 - 7387.695: 1.1569% ( 21) 00:11:23.910 7387.695 - 7417.484: 1.3296% ( 23) 00:11:23.910 7417.484 - 7447.273: 1.4949% ( 22) 00:11:23.910 7447.273 - 7477.062: 1.6602% ( 22) 00:11:23.910 7477.062 - 7506.851: 1.8104% ( 20) 00:11:23.910 7506.851 - 7536.640: 1.9757% ( 22) 00:11:23.910 7536.640 - 7566.429: 2.1484% ( 23) 00:11:23.910 7566.429 - 7596.218: 2.3137% ( 22) 00:11:23.910 7596.218 - 7626.007: 2.4790% ( 22) 00:11:23.910 7626.007 - 7685.585: 2.7870% ( 41) 00:11:23.910 7685.585 - 7745.164: 3.1175% ( 44) 00:11:23.910 7745.164 - 7804.742: 3.4931% ( 50) 00:11:23.910 7804.742 - 7864.320: 3.8462% ( 47) 00:11:23.910 7864.320 - 7923.898: 4.2819% ( 58) 00:11:23.910 7923.898 - 7983.476: 4.7551% ( 63) 00:11:23.910 7983.476 - 8043.055: 5.3711% ( 82) 00:11:23.910 8043.055 - 8102.633: 6.1974% ( 110) 00:11:23.910 8102.633 - 8162.211: 7.1590% ( 128) 00:11:23.910 8162.211 - 8221.789: 8.2933% ( 151) 00:11:23.910 8221.789 - 8281.367: 9.5553% ( 168) 00:11:23.910 8281.367 - 8340.945: 11.0802% ( 203) 00:11:23.910 8340.945 - 8400.524: 12.7855% ( 227) 00:11:23.910 8400.524 - 8460.102: 14.5883% ( 240) 00:11:23.910 8460.102 - 8519.680: 16.5415% ( 260) 00:11:23.910 8519.680 - 8579.258: 18.6448% ( 280) 00:11:23.910 8579.258 - 8638.836: 20.7482% ( 280) 00:11:23.910 8638.836 - 8698.415: 22.9868% ( 298) 00:11:23.910 8698.415 - 8757.993: 25.2404% ( 300) 00:11:23.910 8757.993 - 8817.571: 27.5240% ( 304) 00:11:23.910 8817.571 - 8877.149: 29.9129% ( 318) 00:11:23.910 8877.149 - 8936.727: 32.3468% ( 324) 00:11:23.910 8936.727 - 8996.305: 34.7581% ( 321) 00:11:23.910 8996.305 - 9055.884: 37.2296% ( 329) 00:11:23.910 9055.884 - 9115.462: 39.6559% ( 323) 00:11:23.910 9115.462 - 9175.040: 42.0748% ( 322) 00:11:23.910 9175.040 - 9234.618: 44.4411% ( 315) 00:11:23.910 9234.618 - 9294.196: 46.8975% ( 327) 00:11:23.910 9294.196 - 9353.775: 49.3164% ( 322) 00:11:23.910 9353.775 - 9413.353: 51.7127% ( 319) 
00:11:23.910 9413.353 - 9472.931: 54.0039% ( 305) 00:11:23.910 9472.931 - 9532.509: 56.2725% ( 302) 00:11:23.910 9532.509 - 9592.087: 58.4660% ( 292) 00:11:23.910 9592.087 - 9651.665: 60.7272% ( 301) 00:11:23.910 9651.665 - 9711.244: 63.0484% ( 309) 00:11:23.910 9711.244 - 9770.822: 65.3546% ( 307) 00:11:23.910 9770.822 - 9830.400: 67.7058% ( 313) 00:11:23.910 9830.400 - 9889.978: 69.9895% ( 304) 00:11:23.910 9889.978 - 9949.556: 72.2656% ( 303) 00:11:23.910 9949.556 - 10009.135: 74.6019% ( 311) 00:11:23.910 10009.135 - 10068.713: 76.7803% ( 290) 00:11:23.910 10068.713 - 10128.291: 78.9213% ( 285) 00:11:23.910 10128.291 - 10187.869: 80.7918% ( 249) 00:11:23.910 10187.869 - 10247.447: 82.3618% ( 209) 00:11:23.910 10247.447 - 10307.025: 83.7215% ( 181) 00:11:23.910 10307.025 - 10366.604: 84.8708% ( 153) 00:11:23.910 10366.604 - 10426.182: 85.8699% ( 133) 00:11:23.910 10426.182 - 10485.760: 86.7037% ( 111) 00:11:23.910 10485.760 - 10545.338: 87.5000% ( 106) 00:11:23.910 10545.338 - 10604.916: 88.2212% ( 96) 00:11:23.910 10604.916 - 10664.495: 88.9123% ( 92) 00:11:23.910 10664.495 - 10724.073: 89.5883% ( 90) 00:11:23.910 10724.073 - 10783.651: 90.1968% ( 81) 00:11:23.910 10783.651 - 10843.229: 90.7752% ( 77) 00:11:23.910 10843.229 - 10902.807: 91.3236% ( 73) 00:11:23.910 10902.807 - 10962.385: 91.8495% ( 70) 00:11:23.910 10962.385 - 11021.964: 92.3152% ( 62) 00:11:23.910 11021.964 - 11081.542: 92.7058% ( 52) 00:11:23.910 11081.542 - 11141.120: 93.0965% ( 52) 00:11:23.910 11141.120 - 11200.698: 93.4721% ( 50) 00:11:23.910 11200.698 - 11260.276: 93.8477% ( 50) 00:11:23.910 11260.276 - 11319.855: 94.1932% ( 46) 00:11:23.910 11319.855 - 11379.433: 94.5688% ( 50) 00:11:23.910 11379.433 - 11439.011: 94.9294% ( 48) 00:11:23.910 11439.011 - 11498.589: 95.2900% ( 48) 00:11:23.910 11498.589 - 11558.167: 95.5904% ( 40) 00:11:23.910 11558.167 - 11617.745: 95.8609% ( 36) 00:11:23.910 11617.745 - 11677.324: 96.1163% ( 34) 00:11:23.910 11677.324 - 11736.902: 96.3942% ( 37) 00:11:23.910 11736.902 - 11796.480: 96.6421% ( 33) 00:11:23.910 11796.480 - 11856.058: 96.8525% ( 28) 00:11:23.910 11856.058 - 11915.636: 97.0553% ( 27) 00:11:23.910 11915.636 - 11975.215: 97.2281% ( 23) 00:11:23.910 11975.215 - 12034.793: 97.4008% ( 23) 00:11:23.910 12034.793 - 12094.371: 97.5511% ( 20) 00:11:23.910 12094.371 - 12153.949: 97.6713% ( 16) 00:11:23.910 12153.949 - 12213.527: 97.7915% ( 16) 00:11:23.910 12213.527 - 12273.105: 97.9041% ( 15) 00:11:23.910 12273.105 - 12332.684: 98.0168% ( 15) 00:11:23.910 12332.684 - 12392.262: 98.1295% ( 15) 00:11:23.910 12392.262 - 12451.840: 98.2347% ( 14) 00:11:23.910 12451.840 - 12511.418: 98.3173% ( 11) 00:11:23.910 12511.418 - 12570.996: 98.4225% ( 14) 00:11:23.910 12570.996 - 12630.575: 98.5201% ( 13) 00:11:23.910 12630.575 - 12690.153: 98.6103% ( 12) 00:11:23.910 12690.153 - 12749.731: 98.7079% ( 13) 00:11:23.910 12749.731 - 12809.309: 98.7831% ( 10) 00:11:23.910 12809.309 - 12868.887: 98.8507% ( 9) 00:11:23.910 12868.887 - 12928.465: 98.9258% ( 10) 00:11:23.910 12928.465 - 12988.044: 98.9784% ( 7) 00:11:23.910 12988.044 - 13047.622: 98.9934% ( 2) 00:11:23.910 13047.622 - 13107.200: 99.0159% ( 3) 00:11:23.910 13107.200 - 13166.778: 99.0309% ( 2) 00:11:23.910 13166.778 - 13226.356: 99.0385% ( 1) 00:11:23.910 23950.429 - 24069.585: 99.0610% ( 3) 00:11:23.910 24069.585 - 24188.742: 99.0835% ( 3) 00:11:23.910 24188.742 - 24307.898: 99.1061% ( 3) 00:11:23.910 24307.898 - 24427.055: 99.1286% ( 3) 00:11:23.910 24427.055 - 24546.211: 99.1436% ( 2) 00:11:23.910 24546.211 - 24665.367: 99.1662% ( 
3) 00:11:23.910 24665.367 - 24784.524: 99.1887% ( 3) 00:11:23.910 24784.524 - 24903.680: 99.2112% ( 3) 00:11:23.910 24903.680 - 25022.836: 99.2413% ( 4) 00:11:23.910 25022.836 - 25141.993: 99.2638% ( 3) 00:11:23.910 25141.993 - 25261.149: 99.2864% ( 3) 00:11:23.910 25261.149 - 25380.305: 99.3089% ( 3) 00:11:23.910 25380.305 - 25499.462: 99.3314% ( 3) 00:11:23.910 25499.462 - 25618.618: 99.3540% ( 3) 00:11:23.910 25618.618 - 25737.775: 99.3765% ( 3) 00:11:23.910 25737.775 - 25856.931: 99.3990% ( 3) 00:11:23.910 25856.931 - 25976.087: 99.4291% ( 4) 00:11:23.910 25976.087 - 26095.244: 99.4516% ( 3) 00:11:23.910 26095.244 - 26214.400: 99.4742% ( 3) 00:11:23.910 26214.400 - 26333.556: 99.4967% ( 3) 00:11:23.910 26333.556 - 26452.713: 99.5192% ( 3) 00:11:23.910 26452.713 - 26571.869: 99.5493% ( 4) 00:11:23.910 26571.869 - 26691.025: 99.5718% ( 3) 00:11:23.910 26691.025 - 26810.182: 99.5868% ( 2) 00:11:23.910 26810.182 - 26929.338: 99.6094% ( 3) 00:11:23.910 26929.338 - 27048.495: 99.6244% ( 2) 00:11:23.910 27048.495 - 27167.651: 99.6544% ( 4) 00:11:23.910 27167.651 - 27286.807: 99.6770% ( 3) 00:11:23.910 27286.807 - 27405.964: 99.6995% ( 3) 00:11:23.910 27405.964 - 27525.120: 99.7221% ( 3) 00:11:23.910 27525.120 - 27644.276: 99.7446% ( 3) 00:11:23.910 27644.276 - 27763.433: 99.7671% ( 3) 00:11:23.910 27763.433 - 27882.589: 99.7972% ( 4) 00:11:23.910 27882.589 - 28001.745: 99.8197% ( 3) 00:11:23.910 28001.745 - 28120.902: 99.8422% ( 3) 00:11:23.910 28120.902 - 28240.058: 99.8648% ( 3) 00:11:23.910 28240.058 - 28359.215: 99.8873% ( 3) 00:11:23.910 28359.215 - 28478.371: 99.9099% ( 3) 00:11:23.910 28478.371 - 28597.527: 99.9399% ( 4) 00:11:23.910 28597.527 - 28716.684: 99.9624% ( 3) 00:11:23.910 28716.684 - 28835.840: 99.9850% ( 3) 00:11:23.910 28835.840 - 28954.996: 100.0000% ( 2) 00:11:23.910 00:11:23.910 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:11:23.910 ============================================================================== 00:11:23.910 Range in us Cumulative IO count 00:11:23.910 6940.858 - 6970.647: 0.0075% ( 1) 00:11:23.910 6970.647 - 7000.436: 0.0225% ( 2) 00:11:23.910 7000.436 - 7030.225: 0.0300% ( 1) 00:11:23.910 7030.225 - 7060.015: 0.0451% ( 2) 00:11:23.910 7060.015 - 7089.804: 0.0601% ( 2) 00:11:23.910 7089.804 - 7119.593: 0.0977% ( 5) 00:11:23.910 7119.593 - 7149.382: 0.1728% ( 10) 00:11:23.910 7149.382 - 7179.171: 0.2404% ( 9) 00:11:23.910 7179.171 - 7208.960: 0.3230% ( 11) 00:11:23.910 7208.960 - 7238.749: 0.4282% ( 14) 00:11:23.910 7238.749 - 7268.538: 0.5484% ( 16) 00:11:23.910 7268.538 - 7298.327: 0.6836% ( 18) 00:11:23.910 7298.327 - 7328.116: 0.8113% ( 17) 00:11:23.911 7328.116 - 7357.905: 0.9691% ( 21) 00:11:23.911 7357.905 - 7387.695: 1.1193% ( 20) 00:11:23.911 7387.695 - 7417.484: 1.2921% ( 23) 00:11:23.911 7417.484 - 7447.273: 1.4573% ( 22) 00:11:23.911 7447.273 - 7477.062: 1.5925% ( 18) 00:11:23.911 7477.062 - 7506.851: 1.7578% ( 22) 00:11:23.911 7506.851 - 7536.640: 1.9081% ( 20) 00:11:23.911 7536.640 - 7566.429: 2.0733% ( 22) 00:11:23.911 7566.429 - 7596.218: 2.2386% ( 22) 00:11:23.911 7596.218 - 7626.007: 2.3813% ( 19) 00:11:23.911 7626.007 - 7685.585: 2.6818% ( 40) 00:11:23.911 7685.585 - 7745.164: 3.0123% ( 44) 00:11:23.911 7745.164 - 7804.742: 3.3579% ( 46) 00:11:23.911 7804.742 - 7864.320: 3.7485% ( 52) 00:11:23.911 7864.320 - 7923.898: 4.1466% ( 53) 00:11:23.911 7923.898 - 7983.476: 4.6499% ( 67) 00:11:23.911 7983.476 - 8043.055: 5.2885% ( 85) 00:11:23.911 8043.055 - 8102.633: 6.1373% ( 113) 00:11:23.911 8102.633 - 8162.211: 7.0989% ( 
128) 00:11:23.911 8162.211 - 8221.789: 8.1956% ( 146) 00:11:23.911 8221.789 - 8281.367: 9.5478% ( 180) 00:11:23.911 8281.367 - 8340.945: 11.1253% ( 210) 00:11:23.911 8340.945 - 8400.524: 12.7855% ( 221) 00:11:23.911 8400.524 - 8460.102: 14.5733% ( 238) 00:11:23.911 8460.102 - 8519.680: 16.4964% ( 256) 00:11:23.911 8519.680 - 8579.258: 18.5246% ( 270) 00:11:23.911 8579.258 - 8638.836: 20.6355% ( 281) 00:11:23.911 8638.836 - 8698.415: 22.9342% ( 306) 00:11:23.911 8698.415 - 8757.993: 25.2178% ( 304) 00:11:23.911 8757.993 - 8817.571: 27.5992% ( 317) 00:11:23.911 8817.571 - 8877.149: 30.0406% ( 325) 00:11:23.911 8877.149 - 8936.727: 32.4594% ( 322) 00:11:23.911 8936.727 - 8996.305: 34.9084% ( 326) 00:11:23.911 8996.305 - 9055.884: 37.3197% ( 321) 00:11:23.911 9055.884 - 9115.462: 39.7461% ( 323) 00:11:23.911 9115.462 - 9175.040: 42.1124% ( 315) 00:11:23.911 9175.040 - 9234.618: 44.6289% ( 335) 00:11:23.911 9234.618 - 9294.196: 47.0628% ( 324) 00:11:23.911 9294.196 - 9353.775: 49.4967% ( 324) 00:11:23.911 9353.775 - 9413.353: 51.8555% ( 314) 00:11:23.911 9413.353 - 9472.931: 54.0640% ( 294) 00:11:23.911 9472.931 - 9532.509: 56.3777% ( 308) 00:11:23.911 9532.509 - 9592.087: 58.5938% ( 295) 00:11:23.911 9592.087 - 9651.665: 60.8098% ( 295) 00:11:23.911 9651.665 - 9711.244: 63.0634% ( 300) 00:11:23.911 9711.244 - 9770.822: 65.3320% ( 302) 00:11:23.911 9770.822 - 9830.400: 67.6307% ( 306) 00:11:23.911 9830.400 - 9889.978: 69.9369% ( 307) 00:11:23.911 9889.978 - 9949.556: 72.2356% ( 306) 00:11:23.911 9949.556 - 10009.135: 74.5718% ( 311) 00:11:23.911 10009.135 - 10068.713: 76.8705% ( 306) 00:11:23.911 10068.713 - 10128.291: 78.9438% ( 276) 00:11:23.911 10128.291 - 10187.869: 80.7166% ( 236) 00:11:23.911 10187.869 - 10247.447: 82.2266% ( 201) 00:11:23.911 10247.447 - 10307.025: 83.6313% ( 187) 00:11:23.911 10307.025 - 10366.604: 84.7506% ( 149) 00:11:23.911 10366.604 - 10426.182: 85.7497% ( 133) 00:11:23.911 10426.182 - 10485.760: 86.5535% ( 107) 00:11:23.911 10485.760 - 10545.338: 87.2671% ( 95) 00:11:23.911 10545.338 - 10604.916: 87.9582% ( 92) 00:11:23.911 10604.916 - 10664.495: 88.6193% ( 88) 00:11:23.911 10664.495 - 10724.073: 89.2954% ( 90) 00:11:23.911 10724.073 - 10783.651: 89.9114% ( 82) 00:11:23.911 10783.651 - 10843.229: 90.5349% ( 83) 00:11:23.911 10843.229 - 10902.807: 91.0832% ( 73) 00:11:23.911 10902.807 - 10962.385: 91.5865% ( 67) 00:11:23.911 10962.385 - 11021.964: 92.0823% ( 66) 00:11:23.911 11021.964 - 11081.542: 92.5556% ( 63) 00:11:23.911 11081.542 - 11141.120: 92.9838% ( 57) 00:11:23.911 11141.120 - 11200.698: 93.4195% ( 58) 00:11:23.911 11200.698 - 11260.276: 93.8176% ( 53) 00:11:23.911 11260.276 - 11319.855: 94.1857% ( 49) 00:11:23.911 11319.855 - 11379.433: 94.5388% ( 47) 00:11:23.911 11379.433 - 11439.011: 94.8392% ( 40) 00:11:23.911 11439.011 - 11498.589: 95.1247% ( 38) 00:11:23.911 11498.589 - 11558.167: 95.4252% ( 40) 00:11:23.911 11558.167 - 11617.745: 95.6731% ( 33) 00:11:23.911 11617.745 - 11677.324: 95.9435% ( 36) 00:11:23.911 11677.324 - 11736.902: 96.2064% ( 35) 00:11:23.911 11736.902 - 11796.480: 96.4393% ( 31) 00:11:23.911 11796.480 - 11856.058: 96.6947% ( 34) 00:11:23.911 11856.058 - 11915.636: 96.8975% ( 27) 00:11:23.911 11915.636 - 11975.215: 97.1079% ( 28) 00:11:23.911 11975.215 - 12034.793: 97.2957% ( 25) 00:11:23.911 12034.793 - 12094.371: 97.4835% ( 25) 00:11:23.911 12094.371 - 12153.949: 97.6337% ( 20) 00:11:23.911 12153.949 - 12213.527: 97.7840% ( 20) 00:11:23.911 12213.527 - 12273.105: 97.9342% ( 20) 00:11:23.911 12273.105 - 12332.684: 98.0769% ( 19) 
00:11:23.911 12332.684 - 12392.262: 98.1746% ( 13) 00:11:23.911 12392.262 - 12451.840: 98.2647% ( 12) 00:11:23.911 12451.840 - 12511.418: 98.3398% ( 10) 00:11:23.911 12511.418 - 12570.996: 98.4225% ( 11) 00:11:23.911 12570.996 - 12630.575: 98.4976% ( 10) 00:11:23.911 12630.575 - 12690.153: 98.5802% ( 11) 00:11:23.911 12690.153 - 12749.731: 98.6403% ( 8) 00:11:23.911 12749.731 - 12809.309: 98.6929% ( 7) 00:11:23.911 12809.309 - 12868.887: 98.7380% ( 6) 00:11:23.911 12868.887 - 12928.465: 98.7530% ( 2) 00:11:23.911 12928.465 - 12988.044: 98.7680% ( 2) 00:11:23.911 12988.044 - 13047.622: 98.7831% ( 2) 00:11:23.911 13047.622 - 13107.200: 98.8056% ( 3) 00:11:23.911 13107.200 - 13166.778: 98.8206% ( 2) 00:11:23.911 13166.778 - 13226.356: 98.8356% ( 2) 00:11:23.911 13226.356 - 13285.935: 98.8582% ( 3) 00:11:23.911 13285.935 - 13345.513: 98.8732% ( 2) 00:11:23.911 13345.513 - 13405.091: 98.8957% ( 3) 00:11:23.911 13405.091 - 13464.669: 98.9108% ( 2) 00:11:23.911 13464.669 - 13524.247: 98.9333% ( 3) 00:11:23.911 13524.247 - 13583.825: 98.9483% ( 2) 00:11:23.911 13583.825 - 13643.404: 98.9709% ( 3) 00:11:23.911 13643.404 - 13702.982: 98.9859% ( 2) 00:11:23.911 13702.982 - 13762.560: 99.0084% ( 3) 00:11:23.911 13762.560 - 13822.138: 99.0234% ( 2) 00:11:23.911 13822.138 - 13881.716: 99.0385% ( 2) 00:11:23.911 22282.240 - 22401.396: 99.0610% ( 3) 00:11:23.911 22401.396 - 22520.553: 99.0835% ( 3) 00:11:23.911 22520.553 - 22639.709: 99.1136% ( 4) 00:11:23.911 22639.709 - 22758.865: 99.1361% ( 3) 00:11:23.911 22758.865 - 22878.022: 99.1587% ( 3) 00:11:23.911 22878.022 - 22997.178: 99.1812% ( 3) 00:11:23.911 22997.178 - 23116.335: 99.2037% ( 3) 00:11:23.911 23116.335 - 23235.491: 99.2263% ( 3) 00:11:23.911 23235.491 - 23354.647: 99.2563% ( 4) 00:11:23.911 23354.647 - 23473.804: 99.2788% ( 3) 00:11:23.911 23473.804 - 23592.960: 99.3014% ( 3) 00:11:23.911 23592.960 - 23712.116: 99.3239% ( 3) 00:11:23.911 23712.116 - 23831.273: 99.3465% ( 3) 00:11:23.911 23831.273 - 23950.429: 99.3690% ( 3) 00:11:23.911 23950.429 - 24069.585: 99.3915% ( 3) 00:11:23.911 24069.585 - 24188.742: 99.4141% ( 3) 00:11:23.911 24188.742 - 24307.898: 99.4366% ( 3) 00:11:23.911 24307.898 - 24427.055: 99.4591% ( 3) 00:11:23.911 24427.055 - 24546.211: 99.4817% ( 3) 00:11:23.911 24546.211 - 24665.367: 99.5117% ( 4) 00:11:23.911 24665.367 - 24784.524: 99.5343% ( 3) 00:11:23.911 24784.524 - 24903.680: 99.5568% ( 3) 00:11:23.911 24903.680 - 25022.836: 99.5793% ( 3) 00:11:23.911 25022.836 - 25141.993: 99.6019% ( 3) 00:11:23.911 25141.993 - 25261.149: 99.6244% ( 3) 00:11:23.911 25261.149 - 25380.305: 99.6544% ( 4) 00:11:23.911 25380.305 - 25499.462: 99.6770% ( 3) 00:11:23.911 25499.462 - 25618.618: 99.6995% ( 3) 00:11:23.911 25618.618 - 25737.775: 99.7221% ( 3) 00:11:23.911 25737.775 - 25856.931: 99.7446% ( 3) 00:11:23.911 25856.931 - 25976.087: 99.7671% ( 3) 00:11:23.911 25976.087 - 26095.244: 99.7972% ( 4) 00:11:23.911 26095.244 - 26214.400: 99.8197% ( 3) 00:11:23.911 26214.400 - 26333.556: 99.8422% ( 3) 00:11:23.911 26333.556 - 26452.713: 99.8648% ( 3) 00:11:23.911 26452.713 - 26571.869: 99.8873% ( 3) 00:11:23.911 26571.869 - 26691.025: 99.9174% ( 4) 00:11:23.911 26691.025 - 26810.182: 99.9399% ( 3) 00:11:23.911 26810.182 - 26929.338: 99.9624% ( 3) 00:11:23.911 26929.338 - 27048.495: 99.9850% ( 3) 00:11:23.911 27048.495 - 27167.651: 100.0000% ( 2) 00:11:23.911 00:11:23.911 15:36:45 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:11:25.809 Initializing NVMe Controllers 
00:11:25.809 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:11:25.809 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010]
00:11:25.809 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010]
00:11:25.809 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010]
00:11:25.809 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:11:25.809 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0
00:11:25.809 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0
00:11:25.809 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0
00:11:25.809 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0
00:11:25.809 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0
00:11:25.809 Initialization complete. Launching workers.
00:11:25.809 ========================================================
00:11:25.809 Latency(us)
00:11:25.809 Device Information : IOPS MiB/s Average min max
00:11:25.809 PCIE (0000:00:06.0) NSID 1 from core 0: 10817.65 126.77 11825.13 8072.37 35108.48
00:11:25.809 PCIE (0000:00:07.0) NSID 1 from core 0: 10817.65 126.77 11810.82 8371.61 33164.81
00:11:25.809 PCIE (0000:00:09.0) NSID 1 from core 0: 10817.65 126.77 11795.32 8265.41 34392.94
00:11:25.809 PCIE (0000:00:08.0) NSID 1 from core 0: 10817.65 126.77 11779.94 8416.55 32855.76
00:11:25.809 PCIE (0000:00:08.0) NSID 2 from core 0: 10817.65 126.77 11764.85 8356.77 31604.44
00:11:25.809 PCIE (0000:00:08.0) NSID 3 from core 0: 10817.65 126.77 11749.72 8367.31 30164.72
00:11:25.809 ========================================================
00:11:25.809 Total : 64905.88 760.62 11787.63 8072.37 35108.48
00:11:25.809
00:11:25.809 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0:
00:11:25.809 =================================================================================
00:11:25.809 1.00000% : 8698.415us
00:11:25.809 10.00000% : 9830.400us
00:11:25.809 25.00000% : 10604.916us
00:11:25.809 50.00000% : 11558.167us
00:11:25.809 75.00000% : 12511.418us
00:11:25.809 90.00000% : 13643.404us
00:11:25.809 95.00000% : 14537.076us
00:11:25.809 98.00000% : 16086.109us
00:11:25.809 99.00000% : 30742.342us
00:11:25.809 99.50000% : 33125.469us
00:11:25.809 99.90000% : 34793.658us
00:11:25.809 99.99000% : 35031.971us
00:11:25.809 99.99900% : 35270.284us
00:11:25.809 99.99990% : 35270.284us
00:11:25.809 99.99999% : 35270.284us
00:11:25.809
00:11:25.809 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0:
00:11:25.809 =================================================================================
00:11:25.809 1.00000% : 8817.571us
00:11:25.809 10.00000% : 9949.556us
00:11:25.809 25.00000% : 10664.495us
00:11:25.809 50.00000% : 11558.167us
00:11:25.809 75.00000% : 12451.840us
00:11:25.809 90.00000% : 13583.825us
00:11:25.809 95.00000% : 14417.920us
00:11:25.809 98.00000% : 15728.640us
00:11:25.809 99.00000% : 29193.309us
00:11:25.809 99.50000% : 31218.967us
00:11:25.809 99.90000% : 32887.156us
00:11:25.809 99.99000% : 33363.782us
00:11:25.809 99.99900% : 33363.782us
00:11:25.809 99.99990% : 33363.782us
00:11:25.809 99.99999% : 33363.782us
00:11:25.809
00:11:25.809 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0:
00:11:25.809 =================================================================================
00:11:25.809 1.00000% : 8817.571us
00:11:25.809 10.00000% : 9949.556us
00:11:25.809 25.00000% : 10664.495us
00:11:25.809 50.00000% : 11498.589us
00:11:25.809 75.00000% : 12392.262us
00:11:25.809 90.00000% : 13524.247us
00:11:25.809 95.00000% : 14417.920us
00:11:25.809 98.00000% : 15728.640us
00:11:25.809 99.00000% : 30265.716us
00:11:25.809 99.50000% : 32410.531us
00:11:25.809 99.90000% : 34078.720us
00:11:25.809 99.99000% : 34555.345us
00:11:25.809 99.99900% : 34555.345us
00:11:25.810 99.99990% : 34555.345us
00:11:25.810 99.99999% : 34555.345us
00:11:25.810
00:11:25.810 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0:
00:11:25.810 =================================================================================
00:11:25.810 1.00000% : 8877.149us
00:11:25.810 10.00000% : 9949.556us
00:11:25.810 25.00000% : 10664.495us
00:11:25.810 50.00000% : 11498.589us
00:11:25.810 75.00000% : 12451.840us
00:11:25.810 90.00000% : 13464.669us
00:11:25.810 95.00000% : 14239.185us
00:11:25.810 98.00000% : 15728.640us
00:11:25.810 99.00000% : 28835.840us
00:11:25.810 99.50000% : 30980.655us
00:11:25.810 99.90000% : 32648.844us
00:11:25.810 99.99000% : 32887.156us
00:11:25.810 99.99900% : 32887.156us
00:11:25.810 99.99990% : 32887.156us
00:11:25.810 99.99999% : 32887.156us
00:11:25.810
00:11:25.810 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0:
00:11:25.810 =================================================================================
00:11:25.810 1.00000% : 8877.149us
00:11:25.810 10.00000% : 9949.556us
00:11:25.810 25.00000% : 10664.495us
00:11:25.810 50.00000% : 11498.589us
00:11:25.810 75.00000% : 12392.262us
00:11:25.810 90.00000% : 13405.091us
00:11:25.810 95.00000% : 14298.764us
00:11:25.810 98.00000% : 15847.796us
00:11:25.810 99.00000% : 27525.120us
00:11:25.810 99.50000% : 29669.935us
00:11:25.810 99.90000% : 31457.280us
00:11:25.810 99.99000% : 31695.593us
00:11:25.810 99.99900% : 31695.593us
00:11:25.810 99.99990% : 31695.593us
00:11:25.810 99.99999% : 31695.593us
00:11:25.810
00:11:25.810 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0:
00:11:25.810 =================================================================================
00:11:25.810 1.00000% : 8877.149us
00:11:25.810 10.00000% : 9949.556us
00:11:25.810 25.00000% : 10664.495us
00:11:25.810 50.00000% : 11498.589us
00:11:25.810 75.00000% : 12392.262us
00:11:25.810 90.00000% : 13464.669us
00:11:25.810 95.00000% : 14358.342us
00:11:25.810 98.00000% : 15966.953us
00:11:25.810 99.00000% : 26333.556us
00:11:25.810 99.50000% : 28240.058us
00:11:25.810 99.90000% : 29789.091us
00:11:25.810 99.99000% : 30146.560us
00:11:25.810 99.99900% : 30265.716us
00:11:25.810 99.99990% : 30265.716us
00:11:25.810 99.99999% : 30265.716us
00:11:25.810
00:11:25.810 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0:
00:11:25.810 ==============================================================================
00:11:25.810 Range in us Cumulative IO count
00:11:25.810 8043.055 - 8102.633: 0.0368% ( 4)
00:11:25.810 8102.633 - 8162.211: 0.0920% ( 6)
00:11:25.810 8162.211 - 8221.789: 0.1379% ( 5)
00:11:25.810 8221.789 - 8281.367: 0.1746% ( 4)
00:11:25.810 8281.367 - 8340.945: 0.2114% ( 4)
00:11:25.810 8340.945 - 8400.524: 0.2665% ( 6)
00:11:25.810 8400.524 - 8460.102: 0.3585% ( 10)
00:11:25.810 8460.102 - 8519.680: 0.4136% ( 6)
00:11:25.810 8519.680 - 8579.258: 0.6710% ( 28)
00:11:25.810 8579.258 - 8638.836: 0.9283% ( 28)
00:11:25.810 8638.836 - 8698.415: 1.2592% ( 36)
00:11:25.810 8698.415 - 8757.993: 1.7555% ( 54)
00:11:25.810 8757.993 - 8817.571: 2.0956% ( 37)
00:11:25.810 8817.571 - 8877.149: 2.3438% ( 27)
00:11:25.810 8877.149 - 8936.727: 2.5735% ( 25)
00:11:25.810 8936.727 - 8996.305: 2.9596% ( 42)
00:11:25.810 8996.305 - 9055.884: 3.2904% ( 36)
00:11:25.810 9055.884 - 9115.462: 3.6581% ( 40)
00:11:25.810
9115.462 - 9175.040: 4.0349% ( 41) 00:11:25.810 9175.040 - 9234.618: 4.3934% ( 39) 00:11:25.810 9234.618 - 9294.196: 4.8438% ( 49) 00:11:25.810 9294.196 - 9353.775: 5.3309% ( 53) 00:11:25.810 9353.775 - 9413.353: 5.7261% ( 43) 00:11:25.810 9413.353 - 9472.931: 6.2776% ( 60) 00:11:25.810 9472.931 - 9532.509: 6.8750% ( 65) 00:11:25.810 9532.509 - 9592.087: 7.4449% ( 62) 00:11:25.810 9592.087 - 9651.665: 8.1893% ( 81) 00:11:25.810 9651.665 - 9711.244: 8.9154% ( 79) 00:11:25.810 9711.244 - 9770.822: 9.6599% ( 81) 00:11:25.810 9770.822 - 9830.400: 10.3493% ( 75) 00:11:25.810 9830.400 - 9889.978: 11.1765% ( 90) 00:11:25.810 9889.978 - 9949.556: 12.1324% ( 104) 00:11:25.810 9949.556 - 10009.135: 13.0699% ( 102) 00:11:25.810 10009.135 - 10068.713: 14.2004% ( 123) 00:11:25.810 10068.713 - 10128.291: 15.2390% ( 113) 00:11:25.810 10128.291 - 10187.869: 16.1489% ( 99) 00:11:25.810 10187.869 - 10247.447: 17.3254% ( 128) 00:11:25.810 10247.447 - 10307.025: 18.8327% ( 164) 00:11:25.810 10307.025 - 10366.604: 20.2390% ( 153) 00:11:25.810 10366.604 - 10426.182: 21.6452% ( 153) 00:11:25.810 10426.182 - 10485.760: 23.0882% ( 157) 00:11:25.810 10485.760 - 10545.338: 24.6140% ( 166) 00:11:25.810 10545.338 - 10604.916: 26.1305% ( 165) 00:11:25.810 10604.916 - 10664.495: 27.6195% ( 162) 00:11:25.810 10664.495 - 10724.073: 29.2096% ( 173) 00:11:25.810 10724.073 - 10783.651: 30.8548% ( 179) 00:11:25.810 10783.651 - 10843.229: 32.4449% ( 173) 00:11:25.810 10843.229 - 10902.807: 34.1728% ( 188) 00:11:25.810 10902.807 - 10962.385: 35.8915% ( 187) 00:11:25.810 10962.385 - 11021.964: 37.6195% ( 188) 00:11:25.810 11021.964 - 11081.542: 39.1636% ( 168) 00:11:25.810 11081.542 - 11141.120: 40.8364% ( 182) 00:11:25.810 11141.120 - 11200.698: 42.4081% ( 171) 00:11:25.810 11200.698 - 11260.276: 43.9246% ( 165) 00:11:25.810 11260.276 - 11319.855: 45.3860% ( 159) 00:11:25.810 11319.855 - 11379.433: 46.9577% ( 171) 00:11:25.810 11379.433 - 11439.011: 48.4007% ( 157) 00:11:25.810 11439.011 - 11498.589: 49.8805% ( 161) 00:11:25.810 11498.589 - 11558.167: 51.5074% ( 177) 00:11:25.810 11558.167 - 11617.745: 52.9412% ( 156) 00:11:25.810 11617.745 - 11677.324: 54.3750% ( 156) 00:11:25.810 11677.324 - 11736.902: 55.9283% ( 169) 00:11:25.810 11736.902 - 11796.480: 57.5368% ( 175) 00:11:25.810 11796.480 - 11856.058: 59.0901% ( 169) 00:11:25.810 11856.058 - 11915.636: 60.5974% ( 164) 00:11:25.810 11915.636 - 11975.215: 62.2610% ( 181) 00:11:25.810 11975.215 - 12034.793: 63.9522% ( 184) 00:11:25.810 12034.793 - 12094.371: 65.4688% ( 165) 00:11:25.810 12094.371 - 12153.949: 66.9301% ( 159) 00:11:25.810 12153.949 - 12213.527: 68.4467% ( 165) 00:11:25.810 12213.527 - 12273.105: 69.8254% ( 150) 00:11:25.810 12273.105 - 12332.684: 71.3235% ( 163) 00:11:25.810 12332.684 - 12392.262: 72.7206% ( 152) 00:11:25.810 12392.262 - 12451.840: 74.1544% ( 156) 00:11:25.810 12451.840 - 12511.418: 75.4504% ( 141) 00:11:25.810 12511.418 - 12570.996: 76.6820% ( 134) 00:11:25.810 12570.996 - 12630.575: 77.8768% ( 130) 00:11:25.810 12630.575 - 12690.153: 78.9890% ( 121) 00:11:25.810 12690.153 - 12749.731: 80.1654% ( 128) 00:11:25.810 12749.731 - 12809.309: 81.0570% ( 97) 00:11:25.810 12809.309 - 12868.887: 82.0496% ( 108) 00:11:25.810 12868.887 - 12928.465: 82.9871% ( 102) 00:11:25.810 12928.465 - 12988.044: 83.9062% ( 100) 00:11:25.810 12988.044 - 13047.622: 84.7702% ( 94) 00:11:25.810 13047.622 - 13107.200: 85.5331% ( 83) 00:11:25.810 13107.200 - 13166.778: 86.2224% ( 75) 00:11:25.810 13166.778 - 13226.356: 86.8199% ( 65) 00:11:25.810 13226.356 - 13285.935: 
87.3713% ( 60)
00:11:25.810 [latency histogram buckets continue from 13285.935 us through 35270.284 us; cumulative IO count reaches 100.0000%]
00:11:25.811
00:11:25.811 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0:
00:11:25.811 ==============================================================================
00:11:25.811 Range in us Cumulative IO count
00:11:25.811 [buckets from 8340.945 us through 33363.782 us; cumulative IO count reaches 100.0000%]
00:11:25.812
00:11:25.812 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0:
00:11:25.812 ==============================================================================
00:11:25.812 Range in us Cumulative IO count
00:11:25.812 [buckets from 8221.789 us through 34555.345 us; cumulative IO count reaches 100.0000%]
00:11:25.812
00:11:25.812 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0:
00:11:25.812 ==============================================================================
00:11:25.812 Range in us Cumulative IO count
00:11:25.812 [buckets from 8400.524 us through 32887.156 us; cumulative IO count reaches 100.0000%]
00:11:25.813
00:11:25.813 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0:
00:11:25.813 ==============================================================================
00:11:25.813 Range in us Cumulative IO count
00:11:25.813 [buckets from 8340.945 us through 31695.593 us; cumulative IO count reaches 100.0000%]
00:11:25.814
00:11:25.814 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0:
00:11:25.814 ==============================================================================
00:11:25.814 Range in us Cumulative IO count
00:11:25.814 [buckets from 8340.945 us through 30265.716 us; cumulative IO count reaches 100.0000%]
00:11:25.815
00:11:25.815 15:36:46 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:11:25.815
00:11:25.815 real 0m2.922s
00:11:25.815 user 0m2.487s
00:11:25.815 sys 0m0.322s
00:11:25.815 15:36:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:25.815 15:36:46 -- common/autotest_common.sh@10 -- # set +x
00:11:25.815 ************************************
00:11:25.815 END TEST nvme_perf
00:11:25.815 ************************************
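Each per-namespace table condensed above is a cumulative latency histogram: every completed IO is binned by its completion latency, and each row reports the bucket's range in microseconds, the running percentage of all IOs completed at or below that bucket, and the bucket's own IO count in parentheses. As a rough sketch of that bucketing arithmetic — the bucket edges, sample data, and output format below are invented for illustration and are not SPDK's actual implementation, which derives much finer edges from TSC ticks — a self-contained C version could look like:

    #include <stdio.h>

    /* Hypothetical bucket edges in microseconds; illustrative only. */
    static const double edges_us[] = {8000, 9000, 10000, 11000, 12000, 14000, 16000, 32000};
    enum { NEDGES = sizeof(edges_us) / sizeof(edges_us[0]) };

    int main(void)
    {
        /* Stand-in latency samples in microseconds (assumed data). */
        double samples_us[] = {8400, 9500, 9700, 10200, 11800, 13100, 15900, 30000};
        size_t nsamples = sizeof(samples_us) / sizeof(samples_us[0]);
        unsigned long counts[NEDGES] = {0};  /* counts[b] covers [edges[b-1], edges[b]) */

        /* Place each sample into the bucket whose half-open range contains it. */
        for (size_t i = 0; i < nsamples; i++)
            for (size_t b = 1; b < NEDGES; b++)
                if (samples_us[i] >= edges_us[b - 1] && samples_us[i] < edges_us[b]) {
                    counts[b]++;
                    break;
                }

        /* Print a cumulative table in the same spirit as the log above. */
        printf("Range in us Cumulative IO count\n");
        unsigned long running = 0;
        for (size_t b = 1; b < NEDGES; b++) {
            running += counts[b];
            printf("%9.3f - %9.3f: %8.4f%% ( %lu)\n",
                   edges_us[b - 1], edges_us[b],
                   100.0 * running / nsamples, counts[b]);
        }
        return 0;
    }

Reading the output works the same way as reading the log: the first row whose cumulative percentage crosses the percentile of interest (50%, 99%, ...) gives the corresponding latency bound.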
00:11:25.815 15:36:46 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:11:25.815 15:36:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:11:25.815 15:36:46 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:25.815 15:36:46 -- common/autotest_common.sh@10 -- # set +x
00:11:25.815 ************************************
00:11:25.815 START TEST nvme_hello_world
00:11:25.815 ************************************
00:11:25.815 15:36:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:11:25.815 Initializing NVMe Controllers
00:11:25.815 Attached to 0000:00:06.0
00:11:25.815 Namespace ID: 1 size: 6GB
00:11:25.815 Attached to 0000:00:07.0
00:11:25.815 Namespace ID: 1 size: 5GB
00:11:25.815 Attached to 0000:00:09.0
00:11:25.815 Namespace ID: 1 size: 1GB
00:11:25.815 Attached to 0000:00:08.0
00:11:25.815 Namespace ID: 1 size: 4GB
00:11:25.815 Namespace ID: 2 size: 4GB
00:11:25.815 Namespace ID: 3 size: 4GB
00:11:25.815 Initialization complete.
00:11:25.816 INFO: using host memory buffer for IO
00:11:25.816 Hello world!
00:11:25.816 INFO: using host memory buffer for IO
00:11:25.816 Hello world!
00:11:25.816 INFO: using host memory buffer for IO
00:11:25.816 Hello world!
00:11:25.816 INFO: using host memory buffer for IO
00:11:25.816 Hello world!
00:11:25.816 INFO: using host memory buffer for IO
00:11:25.816 Hello world!
00:11:25.816 INFO: using host memory buffer for IO
00:11:25.816 Hello world!
00:11:25.816
00:11:25.816 real 0m0.389s
00:11:25.816 user 0m0.206s
00:11:25.816 sys 0m0.136s
00:11:25.816 15:36:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:25.816 15:36:47 -- common/autotest_common.sh@10 -- # set +x
00:11:25.816 ************************************
00:11:25.816 END TEST nvme_hello_world
00:11:25.816 ************************************
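The "Attached to ..." and "Namespace ID: ..." lines above come from SPDK's probe/attach callback pattern: spdk_nvme_probe() scans the PCI bus, asks a probe callback whether to claim each controller, and invokes an attach callback once per claimed controller, after which the example walks the active namespaces. Below is a compressed sketch of that enumeration loop, written from memory of SPDK's public C API of this vintage — treat the exact signatures, the opts.name field, and the omitted detach/cleanup as assumptions, not the example's actual source:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Claim every controller the bus scan reports. */
    static bool probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true;
    }

    /* Runs once per attached controller: walk its active namespaces. */
    static void attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
        for (uint32_t nsid = 1; nsid <= spdk_nvme_ctrlr_get_num_ns(ctrlr); nsid++) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
            if (ns != NULL && spdk_nvme_ns_is_active(ns)) {
                printf("Namespace ID: %u size: %juGB\n", nsid,
                       (uintmax_t)(spdk_nvme_ns_get_size(ns) / 1000000000));
            }
        }
    }

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "hello_sketch";    /* assumed option field */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* NULL transport ID: enumerate all local PCIe NVMe controllers.
         * Detach/cleanup is omitted for brevity. */
        if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
            return 1;
        }
        return 0;
    }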
00:11:26.074 15:36:47 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:11:26.074 15:36:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:11:26.074 15:36:47 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:26.074 15:36:47 -- common/autotest_common.sh@10 -- # set +x
00:11:26.074 ************************************
00:11:26.074 START TEST nvme_sgl
00:11:26.074 ************************************
00:11:26.074 15:36:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:11:26.332 0000:00:06.0: build_io_request_0 Invalid IO length parameter
00:11:26.332 0000:00:06.0: build_io_request_1 Invalid IO length parameter
00:11:26.332 0000:00:06.0: build_io_request_3 Invalid IO length parameter
00:11:26.332 0000:00:06.0: build_io_request_8 Invalid IO length parameter
00:11:26.332 0000:00:06.0: build_io_request_9 Invalid IO length parameter
00:11:26.332 0000:00:06.0: build_io_request_11 Invalid IO length parameter
00:11:26.332 0000:00:07.0: build_io_request_0 Invalid IO length parameter
00:11:26.332 0000:00:07.0: build_io_request_1 Invalid IO length parameter
00:11:26.332 0000:00:07.0: build_io_request_3 Invalid IO length parameter
00:11:26.332 0000:00:07.0: build_io_request_8 Invalid IO length parameter
00:11:26.332 0000:00:07.0: build_io_request_9 Invalid IO length parameter
00:11:26.591 0000:00:07.0: build_io_request_11 Invalid IO length parameter
00:11:26.591 0000:00:09.0: build_io_request_0 Invalid IO length parameter
00:11:26.591 0000:00:09.0: build_io_request_1 Invalid IO length parameter
00:11:26.591 0000:00:09.0: build_io_request_2 Invalid IO length parameter
00:11:26.591 0000:00:09.0: build_io_request_3 Invalid IO length parameter
00:11:26.591 0000:00:09.0: build_io_request_4 Invalid IO length parameter
00:11:26.591 0000:00:09.0: build_io_request_5 Invalid IO length parameter
00:11:26.591 0000:00:09.0: build_io_request_6 Invalid IO length parameter
00:11:26.591 0000:00:09.0: build_io_request_7 Invalid IO length parameter
00:11:26.591 0000:00:09.0: build_io_request_8 Invalid IO length parameter
00:11:26.591 0000:00:09.0: build_io_request_9 Invalid IO length parameter
00:11:26.591 0000:00:09.0: build_io_request_10 Invalid IO length parameter
00:11:26.591 0000:00:09.0: build_io_request_11 Invalid IO length parameter
00:11:26.591 0000:00:08.0: build_io_request_0 Invalid IO length parameter
00:11:26.591 0000:00:08.0: build_io_request_1 Invalid IO length parameter
00:11:26.591 0000:00:08.0: build_io_request_2 Invalid IO length parameter
00:11:26.591 0000:00:08.0: build_io_request_3 Invalid IO length parameter
00:11:26.591 0000:00:08.0: build_io_request_4 Invalid IO length parameter
00:11:26.591 0000:00:08.0: build_io_request_5 Invalid IO length parameter
00:11:26.591 0000:00:08.0: build_io_request_6 Invalid IO length parameter
00:11:26.591 0000:00:08.0: build_io_request_7 Invalid IO length parameter
00:11:26.591 0000:00:08.0: build_io_request_8 Invalid IO length parameter
00:11:26.591 0000:00:08.0: build_io_request_9 Invalid IO length parameter
00:11:26.591 0000:00:08.0: build_io_request_10 Invalid IO length parameter
00:11:26.591 0000:00:08.0: build_io_request_11 Invalid IO length parameter
00:11:26.591 NVMe Readv/Writev Request test
00:11:26.591 Attached to 0000:00:06.0
00:11:26.591 Attached to 0000:00:07.0
00:11:26.591 Attached to 0000:00:09.0
00:11:26.591 Attached to 0000:00:08.0
00:11:26.591 0000:00:06.0: build_io_request_2 test passed
00:11:26.591 0000:00:06.0: build_io_request_4 test passed
00:11:26.591 0000:00:06.0: build_io_request_5 test passed
00:11:26.591 0000:00:06.0: build_io_request_6 test passed
00:11:26.591 0000:00:06.0: build_io_request_7 test passed
00:11:26.591 0000:00:06.0: build_io_request_10 test passed
00:11:26.591 0000:00:07.0: build_io_request_2 test passed
00:11:26.591 0000:00:07.0: build_io_request_4 test passed
00:11:26.591 0000:00:07.0: build_io_request_5 test passed
00:11:26.591 0000:00:07.0: build_io_request_6 test passed
00:11:26.591 0000:00:07.0: build_io_request_7 test passed
00:11:26.591 0000:00:07.0: build_io_request_10 test passed
00:11:26.591 Cleaning up...
00:11:26.591
00:11:26.591 real 0m0.551s
00:11:26.591 user 0m0.362s
00:11:26.591 sys 0m0.140s
00:11:26.591 15:36:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:26.591 15:36:47 -- common/autotest_common.sh@10 -- # set +x
00:11:26.591 ************************************
00:11:26.591 END TEST nvme_sgl
00:11:26.591 ************************************
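The pass/fail split above is the point of the SGL test: each build_io_request_N case assembles a scatter-gather list, and a request whose fragment lengths do not add up to a whole number of sectors (or exceed the transfer limit) must be rejected with "Invalid IO length parameter", while well-formed lists complete. A generic sketch of such a length check follows — the struct, limits, and sample sizes are invented for illustration, not the test's actual code:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* One scatter-gather element: a buffer fragment of an IO payload. */
    struct sge {
        void    *addr;
        uint32_t len;
    };

    /* Hypothetical validator: accept an SGL only if the fragments sum to a
     * whole number of sectors and stay within a maximum transfer size. */
    static int sgl_len_is_valid(const struct sge *sgl, size_t nsge,
                                uint32_t sector_size, uint64_t max_xfer)
    {
        uint64_t total = 0;

        for (size_t i = 0; i < nsge; i++) {
            total += sgl[i].len;
        }
        return total > 0 && total <= max_xfer && total % sector_size == 0;
    }

    int main(void)
    {
        uint8_t a[4096], b[512], c[100];
        struct sge ok[]  = { { a, sizeof(a) }, { b, sizeof(b) } };  /* 4608 B = 9 sectors */
        struct sge bad[] = { { a, sizeof(a) }, { c, sizeof(c) } };  /* 4196 B: not sector-aligned */

        printf("request_0: %s\n", sgl_len_is_valid(ok, 2, 512, 1 << 20)
               ? "test passed" : "Invalid IO length parameter");
        printf("request_1: %s\n", sgl_len_is_valid(bad, 2, 512, 1 << 20)
               ? "test passed" : "Invalid IO length parameter");
        return 0;
    }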
00:11:26.591 15:36:48 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:11:26.591 15:36:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:11:26.591 15:36:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:26.591 15:36:48 -- common/autotest_common.sh@10 -- # set +x
00:11:26.591 ************************************
00:11:26.591 START TEST nvme_e2edp
00:11:26.591 ************************************
00:11:26.591 15:36:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:11:26.849 NVMe Write/Read with End-to-End data protection test
00:11:26.849 Attached to 0000:00:06.0
00:11:26.849 Attached to 0000:00:07.0
00:11:26.849 Attached to 0000:00:09.0
00:11:26.849 Attached to 0000:00:08.0
00:11:26.849 Cleaning up...
00:11:26.849
00:11:26.849 real 0m0.245s
00:11:26.849 user 0m0.088s
00:11:26.849 sys 0m0.119s
00:11:26.849 15:36:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:26.849 15:36:48 -- common/autotest_common.sh@10 -- # set +x
00:11:26.849 ************************************
00:11:26.849 END TEST nvme_e2edp
00:11:26.849 ************************************
00:11:26.849 15:36:48 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:11:26.849 15:36:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:11:26.849 15:36:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:26.849 15:36:48 -- common/autotest_common.sh@10 -- # set +x
00:11:26.849 ************************************
00:11:26.849 START TEST nvme_reserve
00:11:26.849 ************************************
00:11:26.849 15:36:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:11:27.107 =====================================================
00:11:27.107 NVMe Controller at PCI bus 0, device 6, function 0
00:11:27.107 =====================================================
00:11:27.107 Reservations: Not Supported
00:11:27.107 =====================================================
00:11:27.107 NVMe Controller at PCI bus 0, device 7, function 0
00:11:27.107 =====================================================
00:11:27.107 Reservations: Not Supported
00:11:27.107 =====================================================
00:11:27.107 NVMe Controller at PCI bus 0, device 9, function 0
00:11:27.107 =====================================================
00:11:27.107 Reservations: Not Supported
00:11:27.107 =====================================================
00:11:27.107 NVMe Controller at PCI bus 0, device 8, function 0
00:11:27.107 =====================================================
00:11:27.107 Reservations: Not Supported
00:11:27.107 Reservation test passed
00:11:27.107
00:11:27.107 real 0m0.302s
00:11:27.107 user 0m0.111s
00:11:27.107 sys 0m0.143s
00:11:27.107 15:36:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:27.107 15:36:48 -- common/autotest_common.sh@10 -- # set +x
00:11:27.107 ************************************
00:11:27.107 END TEST nvme_reserve
00:11:27.107 ************************************
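The "Reservations: Not Supported" verdicts above come from the controllers' Identify data: persistent reservations are an optional NVMe feature, advertised via bit 5 of the ONCS field (bytes 521:520 of the Identify Controller structure). A minimal, library-free sketch of that check — the helper name and the stand-in payload are invented for illustration:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper: decide whether a controller supports reservations
     * from a raw 4096-byte Identify Controller payload. Per the NVMe spec,
     * ONCS lives at bytes 521:520; bit 5 means "reservations supported". */
    static int ctrlr_supports_reservations(const uint8_t id_ctrlr[4096])
    {
        uint16_t oncs = (uint16_t)id_ctrlr[520] | ((uint16_t)id_ctrlr[521] << 8);
        return (oncs >> 5) & 1;
    }

    int main(void)
    {
        uint8_t payload[4096] = {0};   /* stand-in identify data, all zeros */
        printf("Reservations: %s\n",
               ctrlr_supports_reservations(payload) ? "Supported" : "Not Supported");

        payload[520] = 1 << 5;         /* pretend the bit is set */
        printf("Reservations: %s\n",
               ctrlr_supports_reservations(payload) ? "Supported" : "Not Supported");
        return 0;
    }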
00:11:27.107 15:36:48 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:11:27.107 15:36:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:11:27.107 15:36:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:27.107 15:36:48 -- common/autotest_common.sh@10 -- # set +x
00:11:27.107 ************************************
00:11:27.107 START TEST nvme_err_injection
00:11:27.107 ************************************
00:11:27.107 15:36:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:11:27.672 NVMe Error Injection test
00:11:27.672 Attached to 0000:00:06.0
00:11:27.672 Attached to 0000:00:07.0
00:11:27.672 Attached to 0000:00:09.0
00:11:27.672 Attached to 0000:00:08.0
00:11:27.672 0000:00:06.0: get features failed as expected
00:11:27.672 0000:00:07.0: get features failed as expected
00:11:27.672 0000:00:09.0: get features failed as expected
00:11:27.672 0000:00:08.0: get features failed as expected
00:11:27.672 0000:00:08.0: get features successfully as expected
00:11:27.672 0000:00:06.0: get features successfully as expected
00:11:27.672 0000:00:07.0: get features successfully as expected
00:11:27.672 0000:00:09.0: get features successfully as expected
00:11:27.672 0000:00:06.0: read failed as expected
00:11:27.672 0000:00:07.0: read failed as expected
00:11:27.672 0000:00:09.0: read failed as expected
00:11:27.672 0000:00:08.0: read failed as expected
00:11:27.672 0000:00:06.0: read successfully as expected
00:11:27.672 0000:00:07.0: read successfully as expected
00:11:27.672 0000:00:09.0: read successfully as expected
00:11:27.672 0000:00:08.0: read successfully as expected
00:11:27.672 Cleaning up...
00:11:27.672 ************************************
00:11:27.672 END TEST nvme_err_injection
00:11:27.672 ************************************
00:11:27.672
00:11:27.672 real 0m0.381s
00:11:27.672 user 0m0.198s
00:11:27.672 sys 0m0.125s
00:11:27.672 15:36:49 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:27.672 15:36:49 -- common/autotest_common.sh@10 -- # set +x
00:11:27.673 15:36:49 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:27.673 15:36:49 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']'
00:11:27.673 15:36:49 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:27.673 15:36:49 -- common/autotest_common.sh@10 -- # set +x
00:11:27.673 ************************************
00:11:27.673 START TEST nvme_overhead
00:11:27.673 ************************************
00:11:27.673 15:36:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:29.072 Initializing NVMe Controllers
00:11:29.072 Attached to 0000:00:06.0
00:11:29.072 Attached to 0000:00:07.0
00:11:29.072 Attached to 0000:00:09.0
00:11:29.072 Attached to 0000:00:08.0
00:11:29.072 Initialization complete. Launching workers.
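The submit/complete figures that follow separate per-IO software cost from device latency: how long the submission call itself takes versus how long the completion-reaping path takes. A generic sketch of that instrumentation pattern using clock_gettime — submit_io() and poll_completions() are placeholders standing in for real queue-pair calls, not SPDK functions:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
    }

    /* Placeholder driver hooks: a real harness would submit one IO to the
     * device queue and reap any finished completions here. */
    static void submit_io(void) { /* ... */ }
    static int  poll_completions(void) { return 1; /* pretend one IO finished */ }

    int main(void)
    {
        uint64_t submit_total = 0, complete_total = 0;
        unsigned long ios = 0;

        for (int i = 0; i < 1000; i++) {
            uint64_t t0 = now_ns();
            submit_io();
            submit_total += now_ns() - t0;    /* time inside the submit path */

            t0 = now_ns();
            ios += (unsigned long)poll_completions();
            complete_total += now_ns() - t0;  /* time spent reaping completions */
        }
        printf("submit (in ns) avg = %.1f\n", (double)submit_total / 1000);
        printf("complete (in ns) avg = %.1f\n", (double)complete_total / ios);
        return 0;
    }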
00:11:29.072 submit (in ns) avg, min, max = 16380.5, 12885.5, 102914.5
00:11:29.072 complete (in ns) avg, min, max = 11256.9, 9561.4, 79896.4
00:11:29.072
00:11:29.072 Submit histogram
00:11:29.072 ================
00:11:29.072 Range in us Cumulative Count
00:11:29.072 [buckets from 12.858 us through 103.331 us; cumulative count reaches 100.0000%]
00:11:29.073
00:11:29.073 Complete histogram
00:11:29.073 ==================
00:11:29.073 Range in us Cumulative Count
00:11:29.073 [buckets from 9.542 us onward; the log output breaks off mid-histogram in the 13 us buckets]
00:11:29.073 13.091 - 13.149:
89.4534% ( 34) 00:11:29.073 13.149 - 13.207: 89.7539% ( 32) 00:11:29.073 13.207 - 13.265: 90.0451% ( 31) 00:11:29.073 13.265 - 13.324: 90.3362% ( 31) 00:11:29.073 13.324 - 13.382: 90.6367% ( 32) 00:11:29.073 13.382 - 13.440: 90.8715% ( 25) 00:11:29.073 13.440 - 13.498: 91.0124% ( 15) 00:11:29.073 13.498 - 13.556: 91.1439% ( 14) 00:11:29.073 13.556 - 13.615: 91.2660% ( 13) 00:11:29.073 13.615 - 13.673: 91.3317% ( 7) 00:11:29.073 13.673 - 13.731: 91.3881% ( 6) 00:11:29.073 13.731 - 13.789: 91.4538% ( 7) 00:11:29.073 13.789 - 13.847: 91.4914% ( 4) 00:11:29.073 13.847 - 13.905: 91.5759% ( 9) 00:11:29.073 13.905 - 13.964: 91.6510% ( 8) 00:11:29.073 13.964 - 14.022: 91.7261% ( 8) 00:11:29.073 14.022 - 14.080: 91.7731% ( 5) 00:11:29.073 14.080 - 14.138: 91.8013% ( 3) 00:11:29.073 14.138 - 14.196: 91.9046% ( 11) 00:11:29.074 14.196 - 14.255: 91.9891% ( 9) 00:11:29.074 14.255 - 14.313: 92.0361% ( 5) 00:11:29.074 14.313 - 14.371: 92.1018% ( 7) 00:11:29.074 14.371 - 14.429: 92.1394% ( 4) 00:11:29.074 14.429 - 14.487: 92.1675% ( 3) 00:11:29.074 14.487 - 14.545: 92.2239% ( 6) 00:11:29.074 14.545 - 14.604: 92.2990% ( 8) 00:11:29.074 14.604 - 14.662: 92.3648% ( 7) 00:11:29.074 14.662 - 14.720: 92.4493% ( 9) 00:11:29.074 14.720 - 14.778: 92.5150% ( 7) 00:11:29.074 14.778 - 14.836: 92.6089% ( 10) 00:11:29.074 14.836 - 14.895: 92.7780% ( 18) 00:11:29.074 14.895 - 15.011: 93.0316% ( 27) 00:11:29.074 15.011 - 15.127: 93.3884% ( 38) 00:11:29.074 15.127 - 15.244: 93.6514% ( 28) 00:11:29.074 15.244 - 15.360: 93.8017% ( 16) 00:11:29.074 15.360 - 15.476: 93.9237% ( 13) 00:11:29.074 15.476 - 15.593: 94.0834% ( 17) 00:11:29.074 15.593 - 15.709: 94.2243% ( 15) 00:11:29.074 15.709 - 15.825: 94.3557% ( 14) 00:11:29.074 15.825 - 15.942: 94.4591% ( 11) 00:11:29.074 15.942 - 16.058: 94.5624% ( 11) 00:11:29.074 16.058 - 16.175: 94.6844% ( 13) 00:11:29.074 16.175 - 16.291: 94.7502% ( 7) 00:11:29.074 16.291 - 16.407: 94.8441% ( 10) 00:11:29.074 16.407 - 16.524: 94.9662% ( 13) 00:11:29.074 16.524 - 16.640: 95.1071% ( 15) 00:11:29.074 16.640 - 16.756: 95.2198% ( 12) 00:11:29.074 16.756 - 16.873: 95.3325% ( 12) 00:11:29.074 16.873 - 16.989: 95.4921% ( 17) 00:11:29.074 16.989 - 17.105: 95.5297% ( 4) 00:11:29.074 17.105 - 17.222: 95.6330% ( 11) 00:11:29.074 17.222 - 17.338: 95.7363% ( 11) 00:11:29.074 17.338 - 17.455: 95.8772% ( 15) 00:11:29.074 17.455 - 17.571: 96.0086% ( 14) 00:11:29.074 17.571 - 17.687: 96.1026% ( 10) 00:11:29.074 17.687 - 17.804: 96.2246% ( 13) 00:11:29.074 17.804 - 17.920: 96.4219% ( 21) 00:11:29.074 17.920 - 18.036: 96.5346% ( 12) 00:11:29.074 18.036 - 18.153: 96.6566% ( 13) 00:11:29.074 18.153 - 18.269: 96.8445% ( 20) 00:11:29.074 18.269 - 18.385: 96.9478% ( 11) 00:11:29.074 18.385 - 18.502: 97.0041% ( 6) 00:11:29.074 18.502 - 18.618: 97.1450% ( 15) 00:11:29.074 18.618 - 18.735: 97.2765% ( 14) 00:11:29.074 18.735 - 18.851: 97.4267% ( 16) 00:11:29.074 18.851 - 18.967: 97.5019% ( 8) 00:11:29.074 18.967 - 19.084: 97.5864% ( 9) 00:11:29.074 19.084 - 19.200: 97.6709% ( 9) 00:11:29.074 19.200 - 19.316: 97.7648% ( 10) 00:11:29.074 19.316 - 19.433: 97.8494% ( 9) 00:11:29.074 19.433 - 19.549: 97.9245% ( 8) 00:11:29.074 19.549 - 19.665: 98.0184% ( 10) 00:11:29.074 19.665 - 19.782: 98.1217% ( 11) 00:11:29.074 19.782 - 19.898: 98.2062% ( 9) 00:11:29.074 19.898 - 20.015: 98.2908% ( 9) 00:11:29.074 20.015 - 20.131: 98.4035% ( 12) 00:11:29.074 20.131 - 20.247: 98.4974% ( 10) 00:11:29.074 20.247 - 20.364: 98.5443% ( 5) 00:11:29.074 20.364 - 20.480: 98.6101% ( 7) 00:11:29.074 20.480 - 20.596: 98.7040% ( 10) 00:11:29.074 
20.596 - 20.713: 98.7415% ( 4) 00:11:29.074 20.713 - 20.829: 98.7697% ( 3) 00:11:29.074 20.829 - 20.945: 98.8542% ( 9) 00:11:29.074 20.945 - 21.062: 98.9012% ( 5) 00:11:29.074 21.062 - 21.178: 98.9294% ( 3) 00:11:29.074 21.178 - 21.295: 98.9763% ( 5) 00:11:29.074 21.295 - 21.411: 99.0421% ( 7) 00:11:29.074 21.411 - 21.527: 99.0984% ( 6) 00:11:29.074 21.644 - 21.760: 99.1266% ( 3) 00:11:29.074 21.760 - 21.876: 99.1829% ( 6) 00:11:29.074 21.876 - 21.993: 99.1923% ( 1) 00:11:29.074 21.993 - 22.109: 99.2205% ( 3) 00:11:29.074 22.109 - 22.225: 99.2769% ( 6) 00:11:29.074 22.225 - 22.342: 99.3144% ( 4) 00:11:29.074 22.342 - 22.458: 99.3614% ( 5) 00:11:29.074 22.458 - 22.575: 99.4083% ( 5) 00:11:29.074 22.575 - 22.691: 99.4365% ( 3) 00:11:29.074 22.691 - 22.807: 99.4929% ( 6) 00:11:29.074 22.807 - 22.924: 99.5304% ( 4) 00:11:29.074 22.924 - 23.040: 99.5398% ( 1) 00:11:29.074 23.040 - 23.156: 99.5492% ( 1) 00:11:29.074 23.156 - 23.273: 99.5586% ( 1) 00:11:29.074 23.273 - 23.389: 99.5680% ( 1) 00:11:29.074 23.505 - 23.622: 99.5774% ( 1) 00:11:29.074 23.622 - 23.738: 99.5868% ( 1) 00:11:29.074 23.738 - 23.855: 99.6056% ( 2) 00:11:29.074 23.855 - 23.971: 99.6243% ( 2) 00:11:29.074 23.971 - 24.087: 99.6337% ( 1) 00:11:29.074 24.087 - 24.204: 99.6431% ( 1) 00:11:29.074 24.204 - 24.320: 99.6525% ( 1) 00:11:29.074 24.320 - 24.436: 99.6619% ( 1) 00:11:29.074 24.436 - 24.553: 99.6713% ( 1) 00:11:29.074 25.135 - 25.251: 99.6807% ( 1) 00:11:29.074 25.251 - 25.367: 99.6901% ( 1) 00:11:29.074 25.367 - 25.484: 99.6995% ( 1) 00:11:29.074 25.484 - 25.600: 99.7089% ( 1) 00:11:29.074 26.065 - 26.182: 99.7370% ( 3) 00:11:29.074 26.182 - 26.298: 99.7464% ( 1) 00:11:29.074 26.647 - 26.764: 99.7558% ( 1) 00:11:29.074 26.996 - 27.113: 99.7652% ( 1) 00:11:29.074 27.229 - 27.345: 99.7746% ( 1) 00:11:29.074 27.811 - 27.927: 99.7840% ( 1) 00:11:29.074 28.742 - 28.858: 99.7934% ( 1) 00:11:29.074 31.185 - 31.418: 99.8122% ( 2) 00:11:29.074 31.884 - 32.116: 99.8310% ( 2) 00:11:29.074 32.116 - 32.349: 99.8591% ( 3) 00:11:29.074 32.582 - 32.815: 99.8685% ( 1) 00:11:29.074 33.047 - 33.280: 99.8779% ( 1) 00:11:29.074 33.280 - 33.513: 99.8873% ( 1) 00:11:29.074 33.745 - 33.978: 99.8967% ( 1) 00:11:29.074 34.676 - 34.909: 99.9061% ( 1) 00:11:29.074 35.375 - 35.607: 99.9155% ( 1) 00:11:29.074 35.607 - 35.840: 99.9343% ( 2) 00:11:29.074 36.305 - 36.538: 99.9437% ( 1) 00:11:29.074 36.538 - 36.771: 99.9530% ( 1) 00:11:29.074 39.098 - 39.331: 99.9624% ( 1) 00:11:29.074 39.564 - 39.796: 99.9718% ( 1) 00:11:29.074 46.778 - 47.011: 99.9812% ( 1) 00:11:29.074 51.433 - 51.665: 99.9906% ( 1) 00:11:29.074 79.593 - 80.058: 100.0000% ( 1) 00:11:29.074 00:11:29.074 ************************************ 00:11:29.074 00:11:29.074 real 0m1.313s 00:11:29.074 user 0m1.122s 00:11:29.074 sys 0m0.132s 00:11:29.074 15:36:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.074 15:36:50 -- common/autotest_common.sh@10 -- # set +x 00:11:29.074 END TEST nvme_overhead 00:11:29.074 ************************************ 00:11:29.074 15:36:50 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:29.074 15:36:50 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:29.074 15:36:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:29.074 15:36:50 -- common/autotest_common.sh@10 -- # set +x 00:11:29.074 ************************************ 00:11:29.074 START TEST nvme_arbitration 00:11:29.074 ************************************ 00:11:29.074 15:36:50 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:32.356 Initializing NVMe Controllers 00:11:32.356 Attached to 0000:00:06.0 00:11:32.356 Attached to 0000:00:07.0 00:11:32.356 Attached to 0000:00:09.0 00:11:32.356 Attached to 0000:00:08.0 00:11:32.356 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:11:32.356 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:11:32.356 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:11:32.356 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:11:32.356 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:11:32.356 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:11:32.356 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:11:32.356 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:32.357 Initialization complete. Launching workers. 00:11:32.357 Starting thread on core 1 with urgent priority queue 00:11:32.357 Starting thread on core 2 with urgent priority queue 00:11:32.357 Starting thread on core 3 with urgent priority queue 00:11:32.357 Starting thread on core 0 with urgent priority queue 00:11:32.357 QEMU NVMe Ctrl (12340 ) core 0: 618.67 IO/s 161.64 secs/100000 ios 00:11:32.357 QEMU NVMe Ctrl (12342 ) core 0: 618.67 IO/s 161.64 secs/100000 ios 00:11:32.357 QEMU NVMe Ctrl (12341 ) core 1: 704.00 IO/s 142.05 secs/100000 ios 00:11:32.357 QEMU NVMe Ctrl (12342 ) core 1: 704.00 IO/s 142.05 secs/100000 ios 00:11:32.357 QEMU NVMe Ctrl (12343 ) core 2: 704.00 IO/s 142.05 secs/100000 ios 00:11:32.357 QEMU NVMe Ctrl (12342 ) core 3: 661.33 IO/s 151.21 secs/100000 ios 00:11:32.357 ======================================================== 00:11:32.357 00:11:32.357 ************************************ 00:11:32.357 END TEST nvme_arbitration 00:11:32.357 ************************************ 00:11:32.357 00:11:32.357 real 0m3.483s 00:11:32.357 user 0m9.577s 00:11:32.357 sys 0m0.135s 00:11:32.357 15:36:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.357 15:36:53 -- common/autotest_common.sh@10 -- # set +x 00:11:32.615 15:36:53 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:11:32.615 15:36:53 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:32.615 15:36:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:32.615 15:36:53 -- common/autotest_common.sh@10 -- # set +x 00:11:32.615 ************************************ 00:11:32.615 START TEST nvme_single_aen 00:11:32.615 ************************************ 00:11:32.615 15:36:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:11:32.615 [2024-07-24 15:36:54.023466] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
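The arbitration example above launches one urgent-priority submission thread per core (core mask 0xf in the expanded configuration it prints) and reports per-queue throughput; the repeated "12342" entries are most likely that controller's extra namespaces being spread across cores. A rough sketch of the direct invocation, under the same layout assumptions:

    # 3-second arbitration run (-t 3) on shared-memory instance 0 (-i 0);
    # the binary expands its own defaults (-q 64 -s 131072 -w randrw -M 50
    # ...) before launching the per-core workers shown above.
    /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0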
00:11:32.615 [2024-07-24 15:36:54.023777] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.874 [2024-07-24 15:36:54.216414] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:32.874 [2024-07-24 15:36:54.218153] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:11:32.874 [2024-07-24 15:36:54.219513] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:11:32.874 [2024-07-24 15:36:54.220910] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:11:32.874 Asynchronous Event Request test 00:11:32.874 Attached to 0000:00:06.0 00:11:32.874 Attached to 0000:00:07.0 00:11:32.874 Attached to 0000:00:09.0 00:11:32.874 Attached to 0000:00:08.0 00:11:32.874 Reset controller to setup AER completions for this process 00:11:32.874 Registering asynchronous event callbacks... 00:11:32.874 Getting orig temperature thresholds of all controllers 00:11:32.874 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:32.874 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:32.874 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:32.874 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:32.874 Setting all controllers temperature threshold low to trigger AER 00:11:32.874 Waiting for all controllers temperature threshold to be set lower 00:11:32.874 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:32.874 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:11:32.874 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:32.874 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:11:32.874 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:32.874 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:11:32.874 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:32.874 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:11:32.874 Waiting for all controllers to trigger AER and reset threshold 00:11:32.874 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:32.874 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:32.874 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:32.874 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:32.874 Cleaning up... 
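The block above is the heart of the AER test: after attaching, it records each controller's original temperature threshold, lowers the threshold beneath the current temperature so the device immediately raises an asynchronous event, then restores the threshold from the aer_cb handler once the event for log page 2 (SMART / health information) arrives. A sketch of the standalone invocation, same layout assumptions:

    # -T exercises the temperature-threshold AER path; -i 0 selects
    # shared-memory instance 0 and -L log enables the 'log' trace flag.
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log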
00:11:32.874 00:11:32.874 real 0m0.286s 00:11:32.874 user 0m0.106s 00:11:32.874 sys 0m0.134s 00:11:32.874 15:36:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.874 15:36:54 -- common/autotest_common.sh@10 -- # set +x 00:11:32.874 ************************************ 00:11:32.874 END TEST nvme_single_aen 00:11:32.874 ************************************ 00:11:32.874 15:36:54 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:32.874 15:36:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:32.874 15:36:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:32.874 15:36:54 -- common/autotest_common.sh@10 -- # set +x 00:11:32.874 ************************************ 00:11:32.874 START TEST nvme_doorbell_aers 00:11:32.874 ************************************ 00:11:32.874 15:36:54 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:11:32.874 15:36:54 -- nvme/nvme.sh@70 -- # bdfs=() 00:11:32.874 15:36:54 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:32.874 15:36:54 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:32.874 15:36:54 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:32.874 15:36:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:32.874 15:36:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:32.874 15:36:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:32.874 15:36:54 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:32.874 15:36:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:32.874 15:36:54 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:32.874 15:36:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:32.874 15:36:54 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:32.874 15:36:54 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:33.132 [2024-07-24 15:36:54.674808] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64614) is not found. Dropping the request. 00:11:43.129 Executing: test_write_invalid_db 00:11:43.129 Waiting for AER completion... 00:11:43.129 Failure: test_write_invalid_db 00:11:43.129 00:11:43.129 Executing: test_invalid_db_write_overflow_sq 00:11:43.129 Waiting for AER completion... 00:11:43.129 Failure: test_invalid_db_write_overflow_sq 00:11:43.129 00:11:43.129 Executing: test_invalid_db_write_overflow_cq 00:11:43.129 Waiting for AER completion... 00:11:43.129 Failure: test_invalid_db_write_overflow_cq 00:11:43.129 00:11:43.129 15:37:04 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:43.129 15:37:04 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:11:43.129 [2024-07-24 15:37:04.667660] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64614) is not found. Dropping the request. 00:11:53.093 Executing: test_write_invalid_db 00:11:53.093 Waiting for AER completion... 00:11:53.093 Failure: test_write_invalid_db 00:11:53.093 00:11:53.093 Executing: test_invalid_db_write_overflow_sq 00:11:53.093 Waiting for AER completion... 
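The doorbell test running here repeats one cycle per controller: each "Executing:" line is a probe that writes a deliberately invalid doorbell value (a bad write, then submission-queue and completion-queue overflows) and waits for the resulting asynchronous event, and the paired "Failure:" line records that probe's outcome; the suite still finishes with END TEST below, so those failures are tolerated on this QEMU-emulated hardware. The per-device loop is visible in the xtrace above; a condensed sketch of it, assuming gen_nvme.sh and jq are available:

    # Enumerate NVMe PCI addresses from the generated config, then run
    # the doorbell test against each one, capped at 10 seconds per device.
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers \
            -r "trtype:PCIe traddr:$bdf"
    done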
00:11:53.093 Failure: test_invalid_db_write_overflow_sq 00:11:53.093 00:11:53.093 Executing: test_invalid_db_write_overflow_cq 00:11:53.093 Waiting for AER completion... 00:11:53.093 Failure: test_invalid_db_write_overflow_cq 00:11:53.093 00:11:53.093 15:37:14 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:53.093 15:37:14 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:53.351 [2024-07-24 15:37:14.725068] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64614) is not found. Dropping the request. 00:12:03.332 Executing: test_write_invalid_db 00:12:03.332 Waiting for AER completion... 00:12:03.332 Failure: test_write_invalid_db 00:12:03.332 00:12:03.332 Executing: test_invalid_db_write_overflow_sq 00:12:03.332 Waiting for AER completion... 00:12:03.332 Failure: test_invalid_db_write_overflow_sq 00:12:03.332 00:12:03.332 Executing: test_invalid_db_write_overflow_cq 00:12:03.332 Waiting for AER completion... 00:12:03.332 Failure: test_invalid_db_write_overflow_cq 00:12:03.332 00:12:03.332 15:37:24 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:03.332 15:37:24 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:12:03.332 [2024-07-24 15:37:24.836766] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64614) is not found. Dropping the request. 00:12:13.291 Executing: test_write_invalid_db 00:12:13.291 Waiting for AER completion... 00:12:13.291 Failure: test_write_invalid_db 00:12:13.291 00:12:13.291 Executing: test_invalid_db_write_overflow_sq 00:12:13.291 Waiting for AER completion... 00:12:13.291 Failure: test_invalid_db_write_overflow_sq 00:12:13.291 00:12:13.291 Executing: test_invalid_db_write_overflow_cq 00:12:13.291 Waiting for AER completion... 00:12:13.291 Failure: test_invalid_db_write_overflow_cq 00:12:13.291 00:12:13.291 ************************************ 00:12:13.291 END TEST nvme_doorbell_aers 00:12:13.291 ************************************ 00:12:13.291 00:12:13.291 real 0m40.231s 00:12:13.291 user 0m34.092s 00:12:13.291 sys 0m5.708s 00:12:13.291 15:37:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.291 15:37:34 -- common/autotest_common.sh@10 -- # set +x 00:12:13.291 15:37:34 -- nvme/nvme.sh@97 -- # uname 00:12:13.291 15:37:34 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:13.291 15:37:34 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:12:13.291 15:37:34 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:12:13.291 15:37:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:13.291 15:37:34 -- common/autotest_common.sh@10 -- # set +x 00:12:13.291 ************************************ 00:12:13.291 START TEST nvme_multi_aen 00:12:13.291 ************************************ 00:12:13.291 15:37:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:12:13.291 [2024-07-24 15:37:34.641758] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
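The run starting here is the multi-process variant of the same AER exercise: the -m flag makes the parent (core mask 0x1) spawn a child process that attaches to the same four controllers through the shared DPDK memory group, and the "[Child]"-prefixed lines further below come from that secondary process. A sketch of the invocation, same assumptions as before:

    # Multi-process mode (-m) layered on the -T temperature-threshold test.
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log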
00:12:13.291 [2024-07-24 15:37:34.642121] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.291 [2024-07-24 15:37:34.826759] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:12:13.291 [2024-07-24 15:37:34.827211] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64614) is not found. Dropping the request. 00:12:13.291 [2024-07-24 15:37:34.827494] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64614) is not found. Dropping the request. 00:12:13.291 [2024-07-24 15:37:34.827745] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64614) is not found. Dropping the request. 00:12:13.291 [2024-07-24 15:37:34.830014] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:12:13.291 [2024-07-24 15:37:34.830237] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64614) is not found. Dropping the request. 00:12:13.291 [2024-07-24 15:37:34.830574] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64614) is not found. Dropping the request. 00:12:13.291 [2024-07-24 15:37:34.830806] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64614) is not found. Dropping the request. 00:12:13.291 [2024-07-24 15:37:34.832642] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:12:13.291 [2024-07-24 15:37:34.832839] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64614) is not found. Dropping the request. 00:12:13.291 [2024-07-24 15:37:34.833100] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64614) is not found. Dropping the request. 00:12:13.291 [2024-07-24 15:37:34.833157] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64614) is not found. Dropping the request. 00:12:13.291 [2024-07-24 15:37:34.835151] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:12:13.291 [2024-07-24 15:37:34.835342] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64614) is not found. Dropping the request. 00:12:13.291 [2024-07-24 15:37:34.835606] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64614) is not found. Dropping the request. 00:12:13.291 [2024-07-24 15:37:34.835844] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64614) is not found. Dropping the request. 00:12:13.291 [2024-07-24 15:37:34.844464] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:12:13.291 [2024-07-24 15:37:34.844794] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 Child process pid: 65131 00:12:13.291 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.548 [Child] Asynchronous Event Request test 00:12:13.548 [Child] Attached to 0000:00:06.0 00:12:13.548 [Child] Attached to 0000:00:07.0 00:12:13.548 [Child] Attached to 0000:00:09.0 00:12:13.548 [Child] Attached to 0000:00:08.0 00:12:13.548 [Child] Registering asynchronous event callbacks... 00:12:13.548 [Child] Getting orig temperature thresholds of all controllers 00:12:13.548 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:13.548 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:13.548 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:13.548 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:13.548 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:13.548 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:13.548 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:13.548 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:13.548 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:13.548 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:13.548 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:13.548 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:13.548 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:13.548 [Child] Cleaning up... 00:12:13.548 Asynchronous Event Request test 00:12:13.548 Attached to 0000:00:06.0 00:12:13.548 Attached to 0000:00:07.0 00:12:13.548 Attached to 0000:00:09.0 00:12:13.548 Attached to 0000:00:08.0 00:12:13.548 Reset controller to setup AER completions for this process 00:12:13.548 Registering asynchronous event callbacks... 
00:12:13.548 Getting orig temperature thresholds of all controllers 00:12:13.548 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:13.548 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:13.548 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:13.548 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:13.548 Setting all controllers temperature threshold low to trigger AER 00:12:13.548 Waiting for all controllers temperature threshold to be set lower 00:12:13.548 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:13.548 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:12:13.548 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:13.548 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:12:13.548 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:13.548 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:12:13.548 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:13.548 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:12:13.548 Waiting for all controllers to trigger AER and reset threshold 00:12:13.548 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:13.548 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:13.548 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:13.548 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:13.548 Cleaning up... 00:12:13.548 00:12:13.548 real 0m0.553s 00:12:13.548 user 0m0.192s 00:12:13.548 sys 0m0.243s 00:12:13.548 15:37:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.548 15:37:35 -- common/autotest_common.sh@10 -- # set +x 00:12:13.548 ************************************ 00:12:13.548 END TEST nvme_multi_aen 00:12:13.548 ************************************ 00:12:13.806 15:37:35 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:13.806 15:37:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:13.806 15:37:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:13.806 15:37:35 -- common/autotest_common.sh@10 -- # set +x 00:12:13.806 ************************************ 00:12:13.806 START TEST nvme_startup 00:12:13.806 ************************************ 00:12:13.806 15:37:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:14.063 Initializing NVMe Controllers 00:12:14.063 Attached to 0000:00:06.0 00:12:14.063 Attached to 0000:00:07.0 00:12:14.063 Attached to 0000:00:09.0 00:12:14.063 Attached to 0000:00:08.0 00:12:14.063 Initialization complete. 00:12:14.063 Time used:193887.266 (us). 
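"Time used" above is the entire output of the startup test: it measures how long attaching and initializing all four controllers takes and checks it against the budget passed on the command line. Reading -t 1000000 as a microsecond limit matches the "(us)" unit in the output, though that interpretation is an assumption here:

    # Attach every controller once and report the initialization time.
    /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000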
00:12:14.063 ************************************ 00:12:14.063 END TEST nvme_startup 00:12:14.063 ************************************ 00:12:14.063 00:12:14.063 real 0m0.297s 00:12:14.063 user 0m0.113s 00:12:14.063 sys 0m0.132s 00:12:14.063 15:37:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.063 15:37:35 -- common/autotest_common.sh@10 -- # set +x 00:12:14.063 15:37:35 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:14.063 15:37:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:14.063 15:37:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:14.063 15:37:35 -- common/autotest_common.sh@10 -- # set +x 00:12:14.063 ************************************ 00:12:14.063 START TEST nvme_multi_secondary 00:12:14.063 ************************************ 00:12:14.063 15:37:35 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:12:14.063 15:37:35 -- nvme/nvme.sh@52 -- # pid0=65187 00:12:14.063 15:37:35 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:14.063 15:37:35 -- nvme/nvme.sh@54 -- # pid1=65188 00:12:14.063 15:37:35 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:14.063 15:37:35 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:18.241 Initializing NVMe Controllers 00:12:18.241 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:18.241 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:18.241 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:18.241 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:18.241 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:12:18.241 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:12:18.241 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:12:18.241 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:12:18.241 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:12:18.241 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:12:18.241 Initialization complete. Launching workers. 
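The latency tables that follow come from concurrent spdk_nvme_perf processes sharing shared-memory instance 0 across cores 0, 1 and 2, all issuing 4 KiB queued reads; because they contend for the same namespaces, per-process IOPS is far below what a single run would achieve, and the core 2 runs show the highest average latencies. A sketch of one such round, using the command lines from the xtrace above:

    # Three perf readers share -i 0: a 5-second run on core 0 (0x1) and
    # two 3-second runs on cores 1 (0x2) and 2 (0x4), 16-deep 4 KiB reads.
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
    wait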
00:12:18.241 ======================================================== 00:12:18.241 Latency(us) 00:12:18.241 Device Information : IOPS MiB/s Average min max 00:12:18.241 PCIE (0000:00:06.0) NSID 1 from core 2: 2117.04 8.27 7546.98 1564.54 38498.14 00:12:18.241 PCIE (0000:00:07.0) NSID 1 from core 2: 2117.04 8.27 7542.11 1572.54 38499.46 00:12:18.241 PCIE (0000:00:09.0) NSID 1 from core 2: 2117.04 8.27 7530.18 1549.62 34202.79 00:12:18.241 PCIE (0000:00:08.0) NSID 1 from core 2: 2117.04 8.27 7512.50 1535.13 26574.36 00:12:18.241 PCIE (0000:00:08.0) NSID 2 from core 2: 2117.04 8.27 7494.77 1530.35 18823.97 00:12:18.241 PCIE (0000:00:08.0) NSID 3 from core 2: 2117.04 8.27 7477.57 1548.09 19072.44 00:12:18.241 ======================================================== 00:12:18.241 Total : 12702.25 49.62 7517.35 1530.35 38499.46 00:12:18.241 00:12:18.241 Initializing NVMe Controllers 00:12:18.241 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:18.241 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:18.241 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:18.241 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:18.241 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:12:18.241 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:12:18.241 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:12:18.241 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:12:18.241 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:12:18.241 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:12:18.241 Initialization complete. Launching workers. 00:12:18.241 ======================================================== 00:12:18.241 Latency(us) 00:12:18.241 Device Information : IOPS MiB/s Average min max 00:12:18.241 PCIE (0000:00:06.0) NSID 1 from core 1: 5099.31 19.92 3141.03 1021.53 10172.88 00:12:18.241 PCIE (0000:00:07.0) NSID 1 from core 1: 5099.31 19.92 3150.79 1056.73 23608.63 00:12:18.241 PCIE (0000:00:09.0) NSID 1 from core 1: 5099.31 19.92 3164.09 1055.45 33894.36 00:12:18.241 PCIE (0000:00:08.0) NSID 1 from core 1: 5099.31 19.92 3169.71 1066.67 33602.42 00:12:18.241 PCIE (0000:00:08.0) NSID 2 from core 1: 5099.31 19.92 3169.89 1043.33 33545.50 00:12:18.241 PCIE (0000:00:08.0) NSID 3 from core 1: 5099.31 19.92 3169.86 1054.25 33102.90 00:12:18.241 ======================================================== 00:12:18.241 Total : 30595.87 119.52 3160.89 1021.53 33894.36 00:12:18.241 00:12:18.241 15:37:39 -- nvme/nvme.sh@56 -- # wait 65187 00:12:19.633 Initializing NVMe Controllers 00:12:19.633 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:19.633 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:19.633 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:19.633 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:19.633 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:12:19.633 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:12:19.633 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:12:19.633 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:12:19.633 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:12:19.633 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:12:19.633 Initialization complete. Launching workers. 
00:12:19.633 ======================================================== 00:12:19.633 Latency(us) 00:12:19.633 Device Information : IOPS MiB/s Average min max 00:12:19.633 PCIE (0000:00:06.0) NSID 1 from core 0: 6719.89 26.25 2378.97 965.53 10077.91 00:12:19.633 PCIE (0000:00:07.0) NSID 1 from core 0: 6719.89 26.25 2380.33 1011.69 10716.48 00:12:19.633 PCIE (0000:00:09.0) NSID 1 from core 0: 6719.89 26.25 2380.32 986.06 10849.47 00:12:19.633 PCIE (0000:00:08.0) NSID 1 from core 0: 6719.89 26.25 2380.31 991.59 10022.90 00:12:19.633 PCIE (0000:00:08.0) NSID 2 from core 0: 6719.89 26.25 2380.28 979.95 9502.70 00:12:19.633 PCIE (0000:00:08.0) NSID 3 from core 0: 6719.89 26.25 2380.23 984.77 9690.01 00:12:19.633 ======================================================== 00:12:19.633 Total : 40319.35 157.50 2380.07 965.53 10849.47 00:12:19.633 00:12:19.633 15:37:41 -- nvme/nvme.sh@57 -- # wait 65188 00:12:19.633 15:37:41 -- nvme/nvme.sh@61 -- # pid0=65263 00:12:19.633 15:37:41 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:12:19.633 15:37:41 -- nvme/nvme.sh@63 -- # pid1=65264 00:12:19.633 15:37:41 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:19.633 15:37:41 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:12:23.850 Initializing NVMe Controllers 00:12:23.850 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:23.850 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:23.850 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:23.850 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:23.850 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:12:23.850 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:12:23.850 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:12:23.850 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:12:23.850 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:12:23.850 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:12:23.850 Initialization complete. Launching workers. 
00:12:23.850 ======================================================== 00:12:23.850 Latency(us) 00:12:23.850 Device Information : IOPS MiB/s Average min max 00:12:23.850 PCIE (0000:00:06.0) NSID 1 from core 1: 4847.11 18.93 3298.93 958.23 8975.20 00:12:23.850 PCIE (0000:00:07.0) NSID 1 from core 1: 4847.11 18.93 3300.47 987.99 9021.12 00:12:23.850 PCIE (0000:00:09.0) NSID 1 from core 1: 4847.11 18.93 3300.53 998.01 8392.31 00:12:23.850 PCIE (0000:00:08.0) NSID 1 from core 1: 4847.11 18.93 3300.58 1000.43 10011.31 00:12:23.850 PCIE (0000:00:08.0) NSID 2 from core 1: 4847.11 18.93 3300.53 978.98 9548.46 00:12:23.850 PCIE (0000:00:08.0) NSID 3 from core 1: 4847.11 18.93 3300.54 990.76 9196.04 00:12:23.850 ======================================================== 00:12:23.850 Total : 29082.65 113.60 3300.26 958.23 10011.31 00:12:23.850 00:12:23.850 Initializing NVMe Controllers 00:12:23.850 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:23.850 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:23.850 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:23.850 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:23.850 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:12:23.850 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:12:23.850 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:12:23.850 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:12:23.850 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:12:23.850 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:12:23.850 Initialization complete. Launching workers. 00:12:23.850 ======================================================== 00:12:23.850 Latency(us) 00:12:23.850 Device Information : IOPS MiB/s Average min max 00:12:23.850 PCIE (0000:00:06.0) NSID 1 from core 0: 5372.29 20.99 2976.38 1167.30 9954.21 00:12:23.850 PCIE (0000:00:07.0) NSID 1 from core 0: 5372.29 20.99 2977.77 1172.53 10082.23 00:12:23.850 PCIE (0000:00:09.0) NSID 1 from core 0: 5372.29 20.99 2978.04 1158.81 10485.56 00:12:23.850 PCIE (0000:00:08.0) NSID 1 from core 0: 5372.29 20.99 2978.03 1154.13 10450.32 00:12:23.850 PCIE (0000:00:08.0) NSID 2 from core 0: 5372.29 20.99 2978.03 1149.11 9448.67 00:12:23.850 PCIE (0000:00:08.0) NSID 3 from core 0: 5372.29 20.99 2978.04 1173.05 9883.38 00:12:23.850 ======================================================== 00:12:23.850 Total : 32233.77 125.91 2977.71 1149.11 10485.56 00:12:23.850 00:12:25.263 Initializing NVMe Controllers 00:12:25.263 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:25.263 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:25.263 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:25.263 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:25.263 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:12:25.263 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:12:25.263 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:12:25.263 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:12:25.263 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:12:25.263 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:12:25.263 Initialization complete. Launching workers. 
00:12:25.263 ======================================================== 00:12:25.263 Latency(us) 00:12:25.263 Device Information : IOPS MiB/s Average min max 00:12:25.263 PCIE (0000:00:06.0) NSID 1 from core 2: 3257.16 12.72 4910.52 984.02 17200.24 00:12:25.263 PCIE (0000:00:07.0) NSID 1 from core 2: 3257.16 12.72 4911.17 983.67 16912.23 00:12:25.263 PCIE (0000:00:09.0) NSID 1 from core 2: 3257.16 12.72 4911.10 908.54 19851.93 00:12:25.263 PCIE (0000:00:08.0) NSID 1 from core 2: 3257.16 12.72 4911.50 832.20 19665.29 00:12:25.263 PCIE (0000:00:08.0) NSID 2 from core 2: 3257.16 12.72 4911.15 772.19 19919.42 00:12:25.263 PCIE (0000:00:08.0) NSID 3 from core 2: 3257.16 12.72 4910.58 713.29 19719.69 00:12:25.263 ======================================================== 00:12:25.263 Total : 19542.95 76.34 4911.00 713.29 19919.42 00:12:25.263 00:12:25.263 ************************************ 00:12:25.263 END TEST nvme_multi_secondary 00:12:25.263 ************************************ 00:12:25.263 15:37:46 -- nvme/nvme.sh@65 -- # wait 65263 00:12:25.263 15:37:46 -- nvme/nvme.sh@66 -- # wait 65264 00:12:25.263 00:12:25.263 real 0m11.119s 00:12:25.263 user 0m19.155s 00:12:25.263 sys 0m0.899s 00:12:25.263 15:37:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.263 15:37:46 -- common/autotest_common.sh@10 -- # set +x 00:12:25.263 15:37:46 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:12:25.263 15:37:46 -- nvme/nvme.sh@102 -- # kill_stub 00:12:25.263 15:37:46 -- common/autotest_common.sh@1065 -- # [[ -e /proc/64179 ]] 00:12:25.263 15:37:46 -- common/autotest_common.sh@1066 -- # kill 64179 00:12:25.263 15:37:46 -- common/autotest_common.sh@1067 -- # wait 64179 00:12:26.197 [2024-07-24 15:37:47.660154] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:12:26.197 [2024-07-24 15:37:47.660267] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:12:26.197 [2024-07-24 15:37:47.660307] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:12:26.197 [2024-07-24 15:37:47.660363] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:12:26.763 [2024-07-24 15:37:48.189802] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:12:26.763 [2024-07-24 15:37:48.189940] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:12:26.763 [2024-07-24 15:37:48.189975] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:12:26.763 [2024-07-24 15:37:48.190006] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:12:27.328 [2024-07-24 15:37:48.683146] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:12:27.328 [2024-07-24 15:37:48.683273] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. 
Dropping the request. 00:12:27.328 [2024-07-24 15:37:48.683314] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:12:27.328 [2024-07-24 15:37:48.683349] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:12:28.741 [2024-07-24 15:37:50.199936] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:12:28.741 [2024-07-24 15:37:50.200026] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:12:28.741 [2024-07-24 15:37:50.200056] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:12:28.741 [2024-07-24 15:37:50.200106] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:12:29.000 15:37:50 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:12:29.000 15:37:50 -- common/autotest_common.sh@1073 -- # echo 2 00:12:29.000 15:37:50 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:29.000 15:37:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:29.000 15:37:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:29.000 15:37:50 -- common/autotest_common.sh@10 -- # set +x 00:12:29.000 ************************************ 00:12:29.000 START TEST bdev_nvme_reset_stuck_adm_cmd 00:12:29.000 ************************************ 00:12:29.000 15:37:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:29.000 * Looking for test storage... 
00:12:29.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:29.000 15:37:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:29.000 15:37:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:29.000 15:37:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:29.000 15:37:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:29.000 15:37:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:29.000 15:37:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:29.000 15:37:50 -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:29.000 15:37:50 -- common/autotest_common.sh@1509 -- # local bdfs 00:12:29.000 15:37:50 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:29.000 15:37:50 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:29.000 15:37:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:29.000 15:37:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:12:29.000 15:37:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:29.000 15:37:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:29.000 15:37:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:29.263 15:37:50 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:29.263 15:37:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:12:29.263 15:37:50 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:12:29.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.263 15:37:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:12:29.263 15:37:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:12:29.263 15:37:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65446 00:12:29.263 15:37:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:12:29.263 15:37:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:29.263 15:37:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65446 00:12:29.263 15:37:50 -- common/autotest_common.sh@819 -- # '[' -z 65446 ']' 00:12:29.263 15:37:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.263 15:37:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:29.263 15:37:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.263 15:37:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:29.263 15:37:50 -- common/autotest_common.sh@10 -- # set +x 00:12:29.263 [2024-07-24 15:37:50.731431] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
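The spdk_tgt instance starting here backs the stuck-admin-command test: the script attaches the first controller as bdev nvme0 over RPC, arms a one-shot error injection that holds the next GET FEATURES admin command (opcode 10) for up to 15 seconds, fires that command in the background, and then checks that bdev_nvme_reset_controller recovers the controller and manually completes the held command with the injected status (SCT 0 / SC 1, invalid opcode). Condensed from the RPC calls visible below, assuming the target is already up with -m 0xF:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
    sleep 2                                   # give the command time to get stuck
    "$rpc" bdev_nvme_reset_controller nvme0   # the reset completes it manually
    "$rpc" bdev_nvme_detach_controller nvme0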
00:12:29.263 [2024-07-24 15:37:50.731965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65446 ] 00:12:29.521 [2024-07-24 15:37:50.964790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.778 [2024-07-24 15:37:51.153466] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:29.778 [2024-07-24 15:37:51.153936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.778 [2024-07-24 15:37:51.154059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.778 [2024-07-24 15:37:51.154140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.778 [2024-07-24 15:37:51.154146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.154 15:37:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:31.154 15:37:52 -- common/autotest_common.sh@852 -- # return 0 00:12:31.154 15:37:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:12:31.154 15:37:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.154 15:37:52 -- common/autotest_common.sh@10 -- # set +x 00:12:31.154 nvme0n1 00:12:31.154 15:37:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.154 15:37:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:31.154 15:37:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_7yw7l.txt 00:12:31.154 15:37:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:31.154 15:37:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.154 15:37:52 -- common/autotest_common.sh@10 -- # set +x 00:12:31.154 true 00:12:31.154 15:37:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.154 15:37:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:31.154 15:37:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721835472 00:12:31.154 15:37:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65482 00:12:31.154 15:37:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:31.154 15:37:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:31.154 15:37:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:33.105 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:33.105 15:37:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.105 15:37:54 -- common/autotest_common.sh@10 -- # set +x 00:12:33.105 [2024-07-24 15:37:54.642610] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:12:33.105 [2024-07-24 15:37:54.643148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:33.105 [2024-07-24 15:37:54.643204] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:33.105 [2024-07-24 15:37:54.643239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.105 [2024-07-24 15:37:54.645511] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:33.105 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65482 00:12:33.105 15:37:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.105 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65482 00:12:33.105 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65482 00:12:33.105 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:33.105 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:33.105 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:33.105 15:37:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.105 15:37:54 -- common/autotest_common.sh@10 -- # set +x 00:12:33.105 15:37:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.105 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:33.105 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_7yw7l.txt 00:12:33.362 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:33.362 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:33.362 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:33.362 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:33.362 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:33.362 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:33.362 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:33.362 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:33.362 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:33.362 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:33.362 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:33.362 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:33.363 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:33.363 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:33.363 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:33.363 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:33.363 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:33.363 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:33.363 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:33.363 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_7yw7l.txt 00:12:33.363 15:37:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65446 00:12:33.363 15:37:54 -- common/autotest_common.sh@926 -- # '[' -z 65446 ']' 00:12:33.363 15:37:54 -- common/autotest_common.sh@930 -- # kill -0 65446 00:12:33.363 15:37:54 -- common/autotest_common.sh@931 -- # uname 00:12:33.363 15:37:54 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:33.363 15:37:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65446 00:12:33.363 killing process with pid 65446 00:12:33.363 15:37:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:33.363 15:37:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:33.363 15:37:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65446' 00:12:33.363 15:37:54 -- common/autotest_common.sh@945 -- # kill 65446 00:12:33.363 15:37:54 -- common/autotest_common.sh@950 -- # wait 65446 00:12:35.889 15:37:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:35.889 15:37:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:35.889 00:12:35.889 real 0m6.399s 00:12:35.889 user 0m22.945s 00:12:35.889 sys 0m0.652s 00:12:35.889 15:37:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.889 15:37:56 -- common/autotest_common.sh@10 -- # set +x 00:12:35.889 ************************************ 00:12:35.889 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:35.889 ************************************ 00:12:35.889 15:37:56 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:35.889 15:37:56 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:35.889 15:37:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:35.889 15:37:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:35.889 15:37:56 -- common/autotest_common.sh@10 -- # set +x 00:12:35.889 ************************************ 00:12:35.889 START TEST nvme_fio 00:12:35.889 ************************************ 00:12:35.889 15:37:56 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:12:35.889 15:37:56 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:35.889 15:37:56 -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:35.889 15:37:56 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:35.889 15:37:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:35.889 15:37:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:12:35.889 15:37:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:35.889 15:37:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:35.889 15:37:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:35.889 15:37:56 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:35.889 15:37:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:12:35.889 15:37:56 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:12:35.889 15:37:56 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:35.889 15:37:56 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:35.889 15:37:56 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:12:35.889 15:37:56 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:35.889 15:37:57 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:12:35.889 15:37:57 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:36.146 15:37:57 -- nvme/nvme.sh@41 -- # bs=4096 00:12:36.146 15:37:57 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:12:36.146 15:37:57 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:12:36.146 15:37:57 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:36.146 15:37:57 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:36.146 15:37:57 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:36.146 15:37:57 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:36.146 15:37:57 -- common/autotest_common.sh@1320 -- # shift 00:12:36.146 15:37:57 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:36.146 15:37:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:36.146 15:37:57 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:36.146 15:37:57 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:36.146 15:37:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:36.146 15:37:57 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:36.146 15:37:57 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:36.147 15:37:57 -- common/autotest_common.sh@1326 -- # break 00:12:36.147 15:37:57 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:36.147 15:37:57 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:12:36.147 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:36.147 fio-3.35 00:12:36.147 Starting 1 thread 00:12:39.427 00:12:39.427 test: (groupid=0, jobs=1): err= 0: pid=65626: Wed Jul 24 15:38:00 2024 00:12:39.427 read: IOPS=14.9k, BW=58.2MiB/s (61.0MB/s)(116MiB/2001msec) 00:12:39.427 slat (nsec): min=4578, max=53456, avg=6733.97, stdev=2889.66 00:12:39.427 clat (usec): min=295, max=11809, avg=4284.09, stdev=952.22 00:12:39.427 lat (usec): min=302, max=11863, avg=4290.82, stdev=953.90 00:12:39.427 clat percentiles (usec): 00:12:39.427 | 1.00th=[ 2835], 5.00th=[ 3326], 10.00th=[ 3523], 20.00th=[ 3654], 00:12:39.427 | 30.00th=[ 3818], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4113], 00:12:39.427 | 70.00th=[ 4228], 80.00th=[ 4621], 90.00th=[ 5932], 95.00th=[ 6259], 00:12:39.427 | 99.00th=[ 7177], 99.50th=[ 7373], 99.90th=[ 8291], 99.95th=[ 9634], 00:12:39.427 | 99.99th=[11600] 00:12:39.427 bw ( KiB/s): min=54968, max=63232, per=98.51%, avg=58706.67, stdev=4187.79, samples=3 00:12:39.427 iops : min=13742, max=15808, avg=14677.33, stdev=1046.76, samples=3 00:12:39.427 write: IOPS=14.9k, BW=58.2MiB/s (61.0MB/s)(117MiB/2001msec); 0 zone resets 00:12:39.427 slat (nsec): min=4654, max=44450, avg=6856.51, stdev=2860.66 00:12:39.427 clat (usec): min=275, max=11586, avg=4273.55, stdev=947.45 00:12:39.427 lat (usec): min=281, max=11605, avg=4280.41, stdev=949.14 00:12:39.427 clat percentiles (usec): 00:12:39.427 | 1.00th=[ 2802], 5.00th=[ 3359], 10.00th=[ 3523], 20.00th=[ 3654], 00:12:39.427 | 30.00th=[ 3818], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4080], 00:12:39.427 | 70.00th=[ 4228], 80.00th=[ 4555], 90.00th=[ 5932], 95.00th=[ 6259], 00:12:39.427 | 99.00th=[ 7177], 99.50th=[ 
7373], 99.90th=[ 8586], 99.95th=[ 9896], 00:12:39.427 | 99.99th=[11338] 00:12:39.427 bw ( KiB/s): min=55320, max=62336, per=98.21%, avg=58549.33, stdev=3541.05, samples=3 00:12:39.427 iops : min=13830, max=15584, avg=14637.33, stdev=885.26, samples=3 00:12:39.427 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:12:39.427 lat (msec) : 2=0.11%, 4=51.00%, 10=48.80%, 20=0.05% 00:12:39.427 cpu : usr=98.95%, sys=0.00%, ctx=3, majf=0, minf=608 00:12:39.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:39.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:39.427 issued rwts: total=29812,29824,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.427 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:39.427 00:12:39.427 Run status group 0 (all jobs): 00:12:39.427 READ: bw=58.2MiB/s (61.0MB/s), 58.2MiB/s-58.2MiB/s (61.0MB/s-61.0MB/s), io=116MiB (122MB), run=2001-2001msec 00:12:39.427 WRITE: bw=58.2MiB/s (61.0MB/s), 58.2MiB/s-58.2MiB/s (61.0MB/s-61.0MB/s), io=117MiB (122MB), run=2001-2001msec 00:12:39.427 ----------------------------------------------------- 00:12:39.427 Suppressions used: 00:12:39.427 count bytes template 00:12:39.427 1 32 /usr/src/fio/parse.c 00:12:39.427 1 8 libtcmalloc_minimal.so 00:12:39.427 ----------------------------------------------------- 00:12:39.427 00:12:39.427 15:38:00 -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:39.427 15:38:00 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:39.427 15:38:00 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:12:39.427 15:38:00 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:39.685 15:38:01 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:12:39.685 15:38:01 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:39.942 15:38:01 -- nvme/nvme.sh@41 -- # bs=4096 00:12:39.942 15:38:01 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:12:39.942 15:38:01 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:12:39.942 15:38:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:39.942 15:38:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:39.942 15:38:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:39.942 15:38:01 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:39.942 15:38:01 -- common/autotest_common.sh@1320 -- # shift 00:12:39.942 15:38:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:39.942 15:38:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:39.942 15:38:01 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:39.942 15:38:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:39.942 15:38:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:39.942 15:38:01 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:39.942 15:38:01 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 
]] 00:12:39.942 15:38:01 -- common/autotest_common.sh@1326 -- # break 00:12:39.942 15:38:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:39.942 15:38:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:12:40.200 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:40.200 fio-3.35 00:12:40.200 Starting 1 thread 00:12:43.478 00:12:43.478 test: (groupid=0, jobs=1): err= 0: pid=65688: Wed Jul 24 15:38:04 2024 00:12:43.478 read: IOPS=13.6k, BW=53.3MiB/s (55.9MB/s)(107MiB/2001msec) 00:12:43.478 slat (nsec): min=4617, max=47037, avg=7469.35, stdev=3361.94 00:12:43.478 clat (usec): min=305, max=9756, avg=4676.23, stdev=1328.93 00:12:43.478 lat (usec): min=310, max=9803, avg=4683.70, stdev=1331.01 00:12:43.478 clat percentiles (usec): 00:12:43.478 | 1.00th=[ 2638], 5.00th=[ 3261], 10.00th=[ 3425], 20.00th=[ 3621], 00:12:43.478 | 30.00th=[ 3785], 40.00th=[ 3916], 50.00th=[ 4080], 60.00th=[ 4359], 00:12:43.478 | 70.00th=[ 5342], 80.00th=[ 6259], 90.00th=[ 6783], 95.00th=[ 7046], 00:12:43.478 | 99.00th=[ 7504], 99.50th=[ 7701], 99.90th=[ 8586], 99.95th=[ 9110], 00:12:43.478 | 99.99th=[ 9634] 00:12:43.478 bw ( KiB/s): min=51752, max=57136, per=99.99%, avg=54562.67, stdev=2699.84, samples=3 00:12:43.478 iops : min=12938, max=14284, avg=13640.67, stdev=674.96, samples=3 00:12:43.478 write: IOPS=13.6k, BW=53.2MiB/s (55.8MB/s)(107MiB/2001msec); 0 zone resets 00:12:43.478 slat (usec): min=4, max=159, avg= 7.60, stdev= 3.51 00:12:43.479 clat (usec): min=256, max=9600, avg=4678.31, stdev=1328.13 00:12:43.479 lat (usec): min=261, max=9608, avg=4685.92, stdev=1330.23 00:12:43.479 clat percentiles (usec): 00:12:43.479 | 1.00th=[ 2638], 5.00th=[ 3261], 10.00th=[ 3458], 20.00th=[ 3621], 00:12:43.479 | 30.00th=[ 3785], 40.00th=[ 3916], 50.00th=[ 4080], 60.00th=[ 4359], 00:12:43.479 | 70.00th=[ 5407], 80.00th=[ 6259], 90.00th=[ 6783], 95.00th=[ 7111], 00:12:43.479 | 99.00th=[ 7504], 99.50th=[ 7767], 99.90th=[ 8586], 99.95th=[ 8979], 00:12:43.479 | 99.99th=[ 9372] 00:12:43.479 bw ( KiB/s): min=51344, max=57456, per=100.00%, avg=54581.33, stdev=3072.10, samples=3 00:12:43.479 iops : min=12836, max=14364, avg=13645.33, stdev=768.02, samples=3 00:12:43.479 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:12:43.479 lat (msec) : 2=0.34%, 4=45.58%, 10=54.04% 00:12:43.479 cpu : usr=98.65%, sys=0.20%, ctx=3, majf=0, minf=607 00:12:43.479 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:43.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:43.479 issued rwts: total=27297,27274,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.479 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:43.479 00:12:43.479 Run status group 0 (all jobs): 00:12:43.479 READ: bw=53.3MiB/s (55.9MB/s), 53.3MiB/s-53.3MiB/s (55.9MB/s-55.9MB/s), io=107MiB (112MB), run=2001-2001msec 00:12:43.479 WRITE: bw=53.2MiB/s (55.8MB/s), 53.2MiB/s-53.2MiB/s (55.8MB/s-55.8MB/s), io=107MiB (112MB), run=2001-2001msec 00:12:43.479 ----------------------------------------------------- 00:12:43.479 Suppressions used: 00:12:43.479 count bytes template 00:12:43.479 1 32 /usr/src/fio/parse.c 00:12:43.479 1 8 libtcmalloc_minimal.so 00:12:43.479 
----------------------------------------------------- 00:12:43.479 00:12:43.479 15:38:04 -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:43.479 15:38:04 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:43.479 15:38:04 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:12:43.479 15:38:04 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:43.737 15:38:05 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:12:43.737 15:38:05 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:43.995 15:38:05 -- nvme/nvme.sh@41 -- # bs=4096 00:12:43.995 15:38:05 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:12:43.995 15:38:05 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:12:43.995 15:38:05 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:43.995 15:38:05 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:43.995 15:38:05 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:43.995 15:38:05 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:43.995 15:38:05 -- common/autotest_common.sh@1320 -- # shift 00:12:43.995 15:38:05 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:43.995 15:38:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:43.995 15:38:05 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:43.995 15:38:05 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:43.995 15:38:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:43.995 15:38:05 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:43.995 15:38:05 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:43.995 15:38:05 -- common/autotest_common.sh@1326 -- # break 00:12:43.995 15:38:05 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:43.995 15:38:05 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:12:44.253 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:44.253 fio-3.35 00:12:44.253 Starting 1 thread 00:12:47.534 00:12:47.534 test: (groupid=0, jobs=1): err= 0: pid=65753: Wed Jul 24 15:38:08 2024 00:12:47.534 read: IOPS=13.0k, BW=50.9MiB/s (53.4MB/s)(102MiB/2001msec) 00:12:47.534 slat (usec): min=4, max=104, avg= 7.52, stdev= 3.28 00:12:47.534 clat (usec): min=469, max=10554, avg=4884.87, stdev=1265.42 00:12:47.534 lat (usec): min=480, max=10609, avg=4892.39, stdev=1267.30 00:12:47.534 clat percentiles (usec): 00:12:47.534 | 1.00th=[ 3064], 5.00th=[ 3654], 10.00th=[ 3818], 20.00th=[ 4015], 00:12:47.534 | 30.00th=[ 4146], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4555], 00:12:47.534 | 70.00th=[ 4817], 80.00th=[ 6390], 90.00th=[ 7111], 95.00th=[ 7504], 00:12:47.534 | 99.00th=[ 7963], 99.50th=[ 8160], 99.90th=[ 9110], 99.95th=[ 9634], 00:12:47.534 | 99.99th=[10421] 00:12:47.534 bw ( 
KiB/s): min=49488, max=54376, per=100.00%, avg=52725.33, stdev=2803.80, samples=3 00:12:47.534 iops : min=12372, max=13594, avg=13181.33, stdev=700.95, samples=3 00:12:47.535 write: IOPS=13.0k, BW=50.9MiB/s (53.4MB/s)(102MiB/2001msec); 0 zone resets 00:12:47.535 slat (nsec): min=4817, max=68880, avg=7713.39, stdev=3250.36 00:12:47.535 clat (usec): min=619, max=10365, avg=4905.25, stdev=1283.34 00:12:47.535 lat (usec): min=629, max=10376, avg=4912.97, stdev=1285.27 00:12:47.535 clat percentiles (usec): 00:12:47.535 | 1.00th=[ 3032], 5.00th=[ 3654], 10.00th=[ 3818], 20.00th=[ 4015], 00:12:47.535 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4555], 00:12:47.535 | 70.00th=[ 4817], 80.00th=[ 6456], 90.00th=[ 7177], 95.00th=[ 7504], 00:12:47.535 | 99.00th=[ 8029], 99.50th=[ 8225], 99.90th=[ 9241], 99.95th=[ 9634], 00:12:47.535 | 99.99th=[10159] 00:12:47.535 bw ( KiB/s): min=49864, max=54448, per=100.00%, avg=52765.33, stdev=2523.32, samples=3 00:12:47.535 iops : min=12466, max=13612, avg=13191.33, stdev=630.83, samples=3 00:12:47.535 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:47.535 lat (msec) : 2=0.25%, 4=19.45%, 10=80.26%, 20=0.03% 00:12:47.535 cpu : usr=98.80%, sys=0.10%, ctx=4, majf=0, minf=607 00:12:47.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:47.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:47.535 issued rwts: total=26070,26066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.535 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:47.535 00:12:47.535 Run status group 0 (all jobs): 00:12:47.535 READ: bw=50.9MiB/s (53.4MB/s), 50.9MiB/s-50.9MiB/s (53.4MB/s-53.4MB/s), io=102MiB (107MB), run=2001-2001msec 00:12:47.535 WRITE: bw=50.9MiB/s (53.4MB/s), 50.9MiB/s-50.9MiB/s (53.4MB/s-53.4MB/s), io=102MiB (107MB), run=2001-2001msec 00:12:47.535 ----------------------------------------------------- 00:12:47.535 Suppressions used: 00:12:47.535 count bytes template 00:12:47.535 1 32 /usr/src/fio/parse.c 00:12:47.535 1 8 libtcmalloc_minimal.so 00:12:47.535 ----------------------------------------------------- 00:12:47.535 00:12:47.535 15:38:08 -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:47.535 15:38:08 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:47.535 15:38:08 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:47.535 15:38:08 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:12:47.792 15:38:09 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:12:47.792 15:38:09 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:48.050 15:38:09 -- nvme/nvme.sh@41 -- # bs=4096 00:12:48.050 15:38:09 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:12:48.050 15:38:09 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:12:48.050 15:38:09 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:48.050 15:38:09 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:48.050 15:38:09 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:48.050 
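The fio_plugin trace repeating here (once per controller) boils down to preloading the ASan runtime ahead of the SPDK ioengine before handing fio the PCIe filename. A condensed standalone sketch, with the paths exactly as they appear in this run and the libclang_rt.asan fallback of the real helper elided:

  # fio loads the SPDK ioengine at runtime, so when SPDK is built with
  # ASan the sanitizer library must come first in LD_PRELOAD.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096

Note the filename syntax: the plugin takes the transport and PCI address inline, with dots instead of colons in the traddr so that fio's own filename parsing does not split the option.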
15:38:09 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:48.050 15:38:09 -- common/autotest_common.sh@1320 -- # shift 00:12:48.050 15:38:09 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:48.050 15:38:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:48.050 15:38:09 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:48.050 15:38:09 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:48.050 15:38:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:48.050 15:38:09 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:48.050 15:38:09 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:48.050 15:38:09 -- common/autotest_common.sh@1326 -- # break 00:12:48.050 15:38:09 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:48.050 15:38:09 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:12:48.050 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:48.050 fio-3.35 00:12:48.050 Starting 1 thread 00:12:52.232 00:12:52.232 test: (groupid=0, jobs=1): err= 0: pid=65808: Wed Jul 24 15:38:13 2024 00:12:52.232 read: IOPS=15.0k, BW=58.7MiB/s (61.6MB/s)(118MiB/2001msec) 00:12:52.232 slat (nsec): min=4587, max=59631, avg=6637.06, stdev=2235.09 00:12:52.232 clat (usec): min=295, max=9079, avg=4236.36, stdev=787.53 00:12:52.232 lat (usec): min=308, max=9139, avg=4243.00, stdev=788.35 00:12:52.232 clat percentiles (usec): 00:12:52.232 | 1.00th=[ 2606], 5.00th=[ 3064], 10.00th=[ 3261], 20.00th=[ 3589], 00:12:52.232 | 30.00th=[ 3818], 40.00th=[ 4047], 50.00th=[ 4293], 60.00th=[ 4424], 00:12:52.232 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5014], 95.00th=[ 5342], 00:12:52.232 | 99.00th=[ 6849], 99.50th=[ 7242], 99.90th=[ 8094], 99.95th=[ 8356], 00:12:52.232 | 99.99th=[ 8848] 00:12:52.232 bw ( KiB/s): min=53216, max=66720, per=99.29%, avg=59722.67, stdev=6765.36, samples=3 00:12:52.232 iops : min=13304, max=16680, avg=14930.67, stdev=1691.34, samples=3 00:12:52.232 write: IOPS=15.0k, BW=58.8MiB/s (61.6MB/s)(118MiB/2001msec); 0 zone resets 00:12:52.232 slat (nsec): min=4744, max=47263, avg=6903.77, stdev=2269.19 00:12:52.232 clat (usec): min=330, max=8953, avg=4240.64, stdev=789.09 00:12:52.232 lat (usec): min=337, max=8966, avg=4247.55, stdev=789.97 00:12:52.232 clat percentiles (usec): 00:12:52.232 | 1.00th=[ 2638], 5.00th=[ 3032], 10.00th=[ 3261], 20.00th=[ 3621], 00:12:52.232 | 30.00th=[ 3818], 40.00th=[ 4080], 50.00th=[ 4359], 60.00th=[ 4424], 00:12:52.232 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5014], 95.00th=[ 5407], 00:12:52.232 | 99.00th=[ 6849], 99.50th=[ 7177], 99.90th=[ 8029], 99.95th=[ 8225], 00:12:52.232 | 99.99th=[ 8586] 00:12:52.232 bw ( KiB/s): min=52992, max=66904, per=98.90%, avg=59506.67, stdev=6997.88, samples=3 00:12:52.232 iops : min=13248, max=16726, avg=14876.67, stdev=1749.47, samples=3 00:12:52.232 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:52.232 lat (msec) : 2=0.11%, 4=37.25%, 10=62.61% 00:12:52.232 cpu : usr=98.65%, sys=0.20%, ctx=7, majf=0, minf=605 00:12:52.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:52.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:52.232 issued rwts: total=30091,30098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:52.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:52.232 00:12:52.232 Run status group 0 (all jobs): 00:12:52.232 READ: bw=58.7MiB/s (61.6MB/s), 58.7MiB/s-58.7MiB/s (61.6MB/s-61.6MB/s), io=118MiB (123MB), run=2001-2001msec 00:12:52.232 WRITE: bw=58.8MiB/s (61.6MB/s), 58.8MiB/s-58.8MiB/s (61.6MB/s-61.6MB/s), io=118MiB (123MB), run=2001-2001msec 00:12:52.232 ----------------------------------------------------- 00:12:52.232 Suppressions used: 00:12:52.232 count bytes template 00:12:52.232 1 32 /usr/src/fio/parse.c 00:12:52.232 1 8 libtcmalloc_minimal.so 00:12:52.232 ----------------------------------------------------- 00:12:52.232 00:12:52.232 ************************************ 00:12:52.232 END TEST nvme_fio 00:12:52.232 ************************************ 00:12:52.232 15:38:13 -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:52.232 15:38:13 -- nvme/nvme.sh@46 -- # true 00:12:52.232 00:12:52.232 real 0m16.874s 00:12:52.232 user 0m13.628s 00:12:52.232 sys 0m1.641s 00:12:52.232 15:38:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:52.232 15:38:13 -- common/autotest_common.sh@10 -- # set +x 00:12:52.490 ************************************ 00:12:52.490 END TEST nvme 00:12:52.490 ************************************ 00:12:52.490 00:12:52.490 real 1m35.219s 00:12:52.490 user 3m49.846s 00:12:52.490 sys 0m13.546s 00:12:52.490 15:38:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:52.490 15:38:13 -- common/autotest_common.sh@10 -- # set +x 00:12:52.490 15:38:13 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:12:52.490 15:38:13 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:52.490 15:38:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:52.490 15:38:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:52.490 15:38:13 -- common/autotest_common.sh@10 -- # set +x 00:12:52.490 ************************************ 00:12:52.490 START TEST nvme_scc 00:12:52.490 ************************************ 00:12:52.490 15:38:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:52.491 * Looking for test storage... 
00:12:52.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:52.491 15:38:13 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:52.491 15:38:13 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:52.491 15:38:13 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:52.491 15:38:13 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:52.491 15:38:13 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:52.491 15:38:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.491 15:38:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.491 15:38:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.491 15:38:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.491 15:38:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.491 15:38:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.491 15:38:13 -- paths/export.sh@5 -- # export PATH 00:12:52.491 15:38:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.491 15:38:13 -- nvme/functions.sh@10 -- # ctrls=() 00:12:52.491 15:38:13 -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:52.491 15:38:13 -- nvme/functions.sh@11 -- # nvmes=() 00:12:52.491 15:38:13 -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:52.491 15:38:13 -- nvme/functions.sh@12 -- # bdfs=() 00:12:52.491 15:38:13 -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:52.491 15:38:13 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:52.491 15:38:13 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:52.491 15:38:13 -- nvme/functions.sh@14 -- # nvme_name= 00:12:52.491 15:38:13 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:52.491 15:38:13 -- nvme/nvme_scc.sh@12 -- # uname 00:12:52.491 15:38:13 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 
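functions.sh keeps its controller inventory in the associative arrays just declared (ctrls, nvmes, bdfs, ordered_ctrls), and the long xtrace that follows is scan_nvme_ctrls filling them in. Stripped to its shape, the walk looks roughly like this; the pci_can_use gating and the per-namespace sub-loop of the real helper are elided:

  # Walk the kernel's view of the controllers and snapshot each one's
  # identify data into an array named after the device (nvme0, nvme1, ...).
  scan_nvme_ctrls() {
      local ctrl
      for ctrl in /sys/class/nvme/nvme*; do
          [[ -e $ctrl ]] || continue
          nvme_get "${ctrl##*/}" id-ctrl "/dev/${ctrl##*/}"
      done
  }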
00:12:52.491 15:38:13 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:12:52.491 15:38:13 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:53.056 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:53.056 Waiting for block devices as requested 00:12:53.056 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:12:53.313 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:12:53.313 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:12:53.313 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:12:58.585 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:12:58.585 15:38:19 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:12:58.585 15:38:19 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:58.585 15:38:19 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:58.585 15:38:19 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:58.585 15:38:19 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:12:58.585 15:38:19 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:12:58.585 15:38:19 -- scripts/common.sh@15 -- # local i 00:12:58.585 15:38:19 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:12:58.585 15:38:19 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:58.585 15:38:19 -- scripts/common.sh@24 -- # return 0 00:12:58.585 15:38:19 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:58.585 15:38:19 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:58.585 15:38:19 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:58.585 15:38:19 -- nvme/functions.sh@18 -- # shift 00:12:58.585 15:38:19 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.585 15:38:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.585 15:38:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.585 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.585 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.585 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.585 15:38:19 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:58.585 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.585 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.585 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.585 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.585 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:58.585 15:38:19 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.585 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.585 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 
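Every eval traced in this dump is nvme_get converting one line of nvme-cli id-ctrl output into an array entry. A simplified sketch of the parse loop; the real helper trims more carefully (values such as sn and mn keep their padding, as the trace shows), so treat the trimming here as an approximation:

  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"              # e.g. declare -gA nvme0, as traced above
      while IFS=: read -r reg val; do
          [[ -n $reg && -n $val ]] || continue   # skip header lines with no colon
          reg=${reg//[$' \t']/}        # key: drop whitespace -> vid, ssvid, ...
          val=${val# }                 # value: drop the space after the colon
          eval "${ref}[$reg]=\"$val\"" # produces e.g. nvme0[vid]="0x1b36"
      done < <(/usr/local/src/nvme-cli/nvme "$@")
  }

So nvme_get nvme0 id-ctrl /dev/nvme0, as invoked above, leaves a global nvme0 array behind that later checks can index directly (e.g. ${nvme0[oncs]}).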
00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 
-- # [[ -n 3 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.586 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.586 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.586 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 
15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # 
nvme0[pels]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # 
eval 'nvme0[awun]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.587 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.587 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:58.587 15:38:19 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:58.588 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.588 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:58.588 15:38:19 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:58.588 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.588 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:58.588 15:38:19 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:58.588 15:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:19 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:58.588 15:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:58.588 15:38:20 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:58.588 15:38:20 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:58.588 15:38:20 -- nvme/functions.sh@62 -- # 
bdfs["$ctrl_dev"]=0000:00:09.0 00:12:58.588 15:38:20 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:58.588 15:38:20 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:58.588 15:38:20 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:12:58.588 15:38:20 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:12:58.588 15:38:20 -- scripts/common.sh@15 -- # local i 00:12:58.588 15:38:20 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:12:58.588 15:38:20 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:58.588 15:38:20 -- scripts/common.sh@24 -- # return 0 00:12:58.588 15:38:20 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:58.588 15:38:20 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:58.588 15:38:20 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@18 -- # shift 00:12:58.588 15:38:20 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 
525400 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.588 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.588 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:58.588 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 
-- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 
'nvme1[frmw]="0x3"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
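
Annotation for readers following the trace: the functions.sh@16-23 entries above show the mechanism behind this whole dump. nvme_get runs nvme-cli, splits each output line on the first colon, and evals the key/value pair into a global associative array named after the device. A minimal sketch of that loop, reconstructed from the xtrace alone (the helper shipped in SPDK's nvme/functions.sh may differ in detail; IFS=': ' here is a simplification that also drops the trailing padding the trace preserves in string fields like sn and mn):

    # Sketch reconstructed from the xtrace above, not copied from the SPDK tree.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                     # trace @20: local -gA 'nvme1=()'
        while IFS=': ' read -r reg val; do
            [[ -n $val ]] || continue           # trace @22: skip lines with no value
            eval "${ref}[${reg}]=\"\$val\""     # trace @23: eval 'nvme1[vid]="0x1b36"'
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # trace @16
    }

    # Usage mirroring the trace:  nvme_get nvme1 id-ctrl /dev/nvme1
    # afterwards:  ${nvme1[vid]} -> 0x1b36,  ${nvme1[sn]} -> 12342
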
00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.589 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.589 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.589 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 
15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- 
# IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- 
# nvme1[icsvscc]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.590 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.590 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:58.590 15:38:20 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:58.591 15:38:20 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:58.591 15:38:20 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:58.591 15:38:20 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:58.591 15:38:20 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@18 -- # shift 00:12:58.591 15:38:20 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 
15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:58.591 15:38:20 -- 
nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.591 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:58.591 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:58.591 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.592 
15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:58.592 
15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:58.592 15:38:20 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:58.592 15:38:20 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:12:58.592 15:38:20 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:12:58.592 15:38:20 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@18 -- # shift 00:12:58.592 15:38:20 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.592 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.592 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 
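
A quick gloss on the lbaf descriptors just captured for nvme1n1: lbads is the log2 of the data block size, ms the metadata bytes per block, and the low four bits of flbas select the format in use. Here flbas=0x4 picks lbaf4 (ms:0 lbads:12), i.e. plain 4096-byte blocks, matching the "(in use)" marker. A small check using only the field values shown above (variable names are illustrative):

    # Decode the in-use LBA format from the nvme1n1 fields captured above.
    flbas=0x4
    lbaf4='ms:0 lbads:12 rp:0 (in use)'

    fmt=$((flbas & 0xf))                              # low nibble -> 4
    [[ $lbaf4 =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
    echo "lbaf$fmt: $((1 << lbads))-byte blocks"      # -> lbaf4: 4096-byte blocks
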
00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:12:58.592 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 
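(The repeating IFS=: / read -r reg val / eval triplets throughout this trace all come from the nvme_get helper in nvme/functions.sh, which pipes nvme id-ctrl / id-ns output into a bash associative array named after the device. A minimal sketch of that pattern, reconstructed from the xtrace above rather than copied from the SPDK source; the exact whitespace trimming and quoting in the real script may differ:

    nvme_get() {
        local ref=$1 reg val           # ref is the target array name, e.g. nvme1n2
        shift                          # remaining args: id-ns /dev/nvme1n2
        local -gA "$ref=()"            # global associative array, as at functions.sh@20
        while IFS=: read -r reg val; do
            reg=${reg// /}             # "nsze   " -> "nsze" (exact trim is an assumption)
            val=${val# }               # drop the space after the colon
            [[ -n $val ]] || continue  # skip banner lines with no value
            # store e.g. nvme1n2[nsze]=0x100000; assumes values contain no '"'
            eval "${ref}[${reg}]=\"${val}\""
        done < <("${NVME_CMD:-nvme}" "$@")   # this log runs /usr/local/src/nvme-cli/nvme
    }

Because read assigns everything after the first colon to the last variable, multi-colon values such as 'ms:0 lbads:12 rp:0 (in use)' survive intact, which is exactly what the lbafN assignments above show.)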
00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 
15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:58.593 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.593 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.593 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:12:58.594 15:38:20 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:58.594 15:38:20 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:12:58.594 15:38:20 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 
id-ns /dev/nvme1n3 00:12:58.594 15:38:20 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@18 -- # shift 00:12:58.594 15:38:20 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # 
nvme1n3[dps]=0 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.594 15:38:20 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:12:58.594 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.594 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.594 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.595 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.595 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.595 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.595 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.595 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:58.595 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.595 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.595 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.595 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.595 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.595 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.595 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.595 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.595 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.595 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:58.595 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 
rp:0 ' 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.595 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.595 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:58.856 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:58.856 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.856 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:58.856 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:58.856 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.856 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:58.856 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:58.856 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.856 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:58.856 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:58.856 15:38:20 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.856 15:38:20 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:12:58.856 15:38:20 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:58.856 15:38:20 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:58.856 15:38:20 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:12:58.856 15:38:20 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:58.856 15:38:20 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:58.856 15:38:20 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:58.856 15:38:20 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:12:58.856 15:38:20 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:12:58.856 15:38:20 -- scripts/common.sh@15 -- # local i 00:12:58.856 15:38:20 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:12:58.856 15:38:20 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:58.856 15:38:20 -- scripts/common.sh@24 -- # return 0 00:12:58.856 15:38:20 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:58.856 15:38:20 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:58.856 15:38:20 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:58.856 15:38:20 -- nvme/functions.sh@18 -- # shift 00:12:58.856 15:38:20 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.856 15:38:20 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:58.856 15:38:20 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.856 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:58.856 15:38:20 -- 
nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:58.856 15:38:20 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.856 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:58.856 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:58.856 15:38:20 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.856 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:58.856 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:12:58.856 15:38:20 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.856 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.856 15:38:20 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 
15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.857 15:38:20 -- 
nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.857 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.857 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:58.857 15:38:20 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:58.858 
15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 
'nvme2[oncs]="0x15d"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.858 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:58.858 15:38:20 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:58.858 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
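(Zooming out, the functions.sh@47-63 lines interleaved in this trace are the outer discovery loop: for every /sys/class/nvme/nvme* controller that pci_can_use (from scripts/common.sh) admits, it runs nvme_get id-ctrl, then walks the controller's nvmeXnY namespaces with nvme_get id-ns and records the results in the ctrls/nvmes/bdfs/ordered_ctrls maps. A reconstructed sketch under the same caveat as before; the wrapper name and the pci derivation are assumptions, while the array assignments mirror the trace:

    scan_nvme_ctrls() {   # hypothetical wrapper name
        declare -gA ctrls nvmes bdfs
        declare -ga ordered_ctrls
        local ctrl ns pci ctrl_dev ns_dev
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue                       # @48
            pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:06.0 (assumption)
            pci_can_use "$pci" || continue                   # PCI allow/block check (@50)
            ctrl_dev=${ctrl##*/}                             # e.g. nvme2 (@51)
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # fills nvme2[...] (@52)
            declare -gA "${ctrl_dev}_ns=()"                  # assumption: backing map exists
            local -n _ctrl_ns=${ctrl_dev}_ns                 # @53
            for ns in "$ctrl/${ctrl##*/}n"*; do              # @54: nvme2n1, nvme2n2, ...
                [[ -e $ns ]] || continue                     # @55
                ns_dev=${ns##*/}                             # @56
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"      # @57: fills nvme2n1[...]
                _ctrl_ns[${ns##*n}]=$ns_dev                  # @58: index by namespace number
            done
            ctrls["$ctrl_dev"]=$ctrl_dev                     # @60
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # @61
            bdfs["$ctrl_dev"]=$pci                           # @62: e.g. 0000:00:08.0
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # @63
            unset -n _ctrl_ns                                # avoid a stale nameref next pass
        done
    }
)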
00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:58.859 15:38:20 -- 
nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:58.859 15:38:20 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:58.859 15:38:20 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:58.859 15:38:20 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:58.859 15:38:20 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@18 -- # shift 00:12:58.859 15:38:20 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 
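(Once populated, these arrays can be queried directly by later test scripts: the low nibble of flbas selects the in-use LBA format, i.e. the lbafN entry the trace tags "(in use)", and its lbads field gives the block size. A hypothetical helper, not present in the log, just to show the lookup:

    get_lbads() {
        local -n ns=$1                           # e.g. ns=nvme1n3
        local idx=$(( ${ns[flbas]} & 0xf ))      # low nibble selects the format
        [[ ${ns[lbaf$idx]} =~ lbads:([0-9]+) ]] && echo "${BASH_REMATCH[1]}"
    }
    # For nvme1n3 above: flbas=0x4 -> lbaf4 'ms:0 lbads:12 rp:0 (in use)' -> 12,
    # i.e. 4096-byte blocks; with nsze=0x100000 that is 0x100000 * 4096 = 4 GiB.
)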
00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.859 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:58.859 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.859 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:58.860 15:38:20 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.860 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.860 15:38:20 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:58.860 15:38:20 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:58.860 15:38:20 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:58.861 15:38:20 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:12:58.861 15:38:20 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:58.861 15:38:20 -- nvme/functions.sh@47 -- # for ctrl 
in /sys/class/nvme/nvme* 00:12:58.861 15:38:20 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:12:58.861 15:38:20 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:12:58.861 15:38:20 -- scripts/common.sh@15 -- # local i 00:12:58.861 15:38:20 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:12:58.861 15:38:20 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:58.861 15:38:20 -- scripts/common.sh@24 -- # return 0 00:12:58.861 15:38:20 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:58.861 15:38:20 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:58.861 15:38:20 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@18 -- # shift 00:12:58.861 15:38:20 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:58.861 15:38:20 -- 
nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:58.861 15:38:20 -- 
nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.861 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.861 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.861 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:58.862 15:38:20 
-- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:58.862 15:38:20 -- 
nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.862 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:58.862 15:38:20 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:58.862 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
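
The controller fields recorded just below (sqes=0x66, cqes=0x44, oncs=0x15d) are worth decoding, since oncs is exactly what the ctrl_has_scc check at the end of this enumeration tests with (( oncs & 1 << 8 )). Per the NVMe spec field layouts, SQES/CQES carry the minimum entry size in the low nibble and the maximum in the high nibble, both as powers of two, and ONCS bit 8 advertises the Copy command. A small illustration, not part of the test scripts:

#!/usr/bin/env bash
sqes=0x66 cqes=0x44 oncs=0x15d   # values from the nvme3 id-ctrl trace
printf 'SQ entry size: min %d, max %d bytes\n' \
    $(( 1 << (sqes & 0xf) )) $(( 1 << (sqes >> 4) ))   # -> 64, 64
printf 'CQ entry size: min %d, max %d bytes\n' \
    $(( 1 << (cqes & 0xf) )) $(( 1 << (cqes >> 4) ))   # -> 16, 16
(( oncs & 1 << 8 )) && echo 'ONCS bit 8: simple copy (scc) supported'
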
00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # 
nvme3[icdoff]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:58.863 15:38:20 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:58.863 15:38:20 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:58.863 15:38:20 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:12:58.863 15:38:20 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:12:58.863 15:38:20 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:12:58.863 15:38:20 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@18 -- # shift 00:12:58.863 15:38:20 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.863 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.863 15:38:20 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 
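
As a quick sanity check on the nvme3n1 geometry read in here: nsze=0x140000 blocks, and the flbas=0x4 value a few entries below selects lbaf4 ("ms:0 lbads:12", flagged in use), i.e. 2^12 = 4096-byte blocks with no metadata. A hypothetical one-liner just to make the size concrete:

#!/usr/bin/env bash
nsze=0x140000 lbads=12
echo "$(( nsze )) blocks x $(( 1 << lbads )) B = $(( nsze * (1 << lbads) )) bytes"
# -> 1310720 blocks x 4096 B = 5368709120 bytes (exactly 5 GiB)
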
00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:12:58.864 15:38:20 -- 
nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.864 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[nvmsetid]="0"' 00:12:58.864 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.864 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.865 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.865 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.865 15:38:20 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.865 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.865 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.865 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.865 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.865 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.865 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 
lbads:12 rp:0 ' 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.865 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.865 15:38:20 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:58.865 15:38:20 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # IFS=: 00:12:58.865 15:38:20 -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.865 15:38:20 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:12:58.865 15:38:20 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:58.865 15:38:20 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:58.865 15:38:20 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:12:58.865 15:38:20 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:58.865 15:38:20 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:58.865 15:38:20 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:12:58.865 15:38:20 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:12:58.865 15:38:20 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:58.865 15:38:20 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:12:58.865 15:38:20 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:58.865 15:38:20 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:12:58.865 15:38:20 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:12:58.865 15:38:20 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:58.865 15:38:20 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:58.865 15:38:20 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:12:58.865 15:38:20 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:12:58.865 15:38:20 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:12:58.865 15:38:20 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:12:58.865 15:38:20 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:12:58.865 15:38:20 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:12:58.865 15:38:20 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:58.865 15:38:20 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:58.865 15:38:20 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:58.865 15:38:20 -- nvme/functions.sh@76 -- # echo 0x15d 00:12:58.865 15:38:20 -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:58.865 15:38:20 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:58.865 15:38:20 -- nvme/functions.sh@197 -- # echo nvme1 00:12:58.865 15:38:20 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:58.865 15:38:20 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:12:58.865 15:38:20 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:12:58.865 15:38:20 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:12:58.865 15:38:20 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:12:58.865 15:38:20 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:12:58.865 15:38:20 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:12:58.865 15:38:20 -- 
nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:12:58.865 15:38:20 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:12:58.865 15:38:20 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:12:58.865 15:38:20 -- nvme/functions.sh@76 -- # echo 0x15d
00:12:58.865 15:38:20 -- nvme/functions.sh@184 -- # oncs=0x15d
00:12:58.865 15:38:20 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:12:58.865 15:38:20 -- nvme/functions.sh@197 -- # echo nvme0
00:12:58.865 15:38:20 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:12:58.865 15:38:20 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3
00:12:58.865 15:38:20 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs
00:12:58.865 15:38:20 -- nvme/functions.sh@184 -- # get_oncs nvme3
00:12:58.865 15:38:20 -- nvme/functions.sh@169 -- # local ctrl=nvme3
00:12:58.865 15:38:20 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs
00:12:58.865 15:38:20 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:12:58.865 15:38:20 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:12:58.865 15:38:20 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:12:58.865 15:38:20 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:12:58.865 15:38:20 -- nvme/functions.sh@76 -- # echo 0x15d
00:12:58.865 15:38:20 -- nvme/functions.sh@184 -- # oncs=0x15d
00:12:58.865 15:38:20 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:12:58.865 15:38:20 -- nvme/functions.sh@197 -- # echo nvme3
00:12:58.865 15:38:20 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:12:58.865 15:38:20 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2
00:12:58.865 15:38:20 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs
00:12:58.865 15:38:20 -- nvme/functions.sh@184 -- # get_oncs nvme2
00:12:58.865 15:38:20 -- nvme/functions.sh@169 -- # local ctrl=nvme2
00:12:58.865 15:38:20 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs
00:12:58.865 15:38:20 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:12:58.865 15:38:20 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:12:58.865 15:38:20 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:12:58.865 15:38:20 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:12:58.865 15:38:20 -- nvme/functions.sh@76 -- # echo 0x15d
00:12:58.865 15:38:20 -- nvme/functions.sh@184 -- # oncs=0x15d
00:12:58.865 15:38:20 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:12:58.865 15:38:20 -- nvme/functions.sh@197 -- # echo nvme2
00:12:58.865 15:38:20 -- nvme/functions.sh@205 -- # (( 4 > 0 ))
00:12:58.865 15:38:20 -- nvme/functions.sh@206 -- # echo nvme1
00:12:58.865 15:38:20 -- nvme/functions.sh@207 -- # return 0
00:12:58.865 15:38:20 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:12:58.865 15:38:20 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0
00:12:58.865 15:38:20 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:59.800 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:13:00.058 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:13:00.058 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:13:00.058 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:13:00.058 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:13:00.058 15:38:21 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0'
00:13:00.058 15:38:21 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:13:00.058 15:38:21 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:00.058 15:38:21 -- common/autotest_common.sh@10 -- # set +x
00:13:00.058 ************************************
00:13:00.058 START TEST nvme_simple_copy
00:13:00.058 ************************************
00:13:00.058 15:38:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0'
00:13:00.316 Initializing NVMe Controllers
00:13:00.316 Attaching to 0000:00:08.0
00:13:00.316 Controller supports SCC. Attached to 0000:00:08.0
00:13:00.316 Namespace ID: 1 size: 4GB
00:13:00.316 Initialization complete.
00:13:00.316
00:13:00.316 Controller QEMU NVMe Ctrl (12342 )
00:13:00.316 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:13:00.316 Namespace Block Size:4096
00:13:00.316 Writing LBAs 0 to 63 with Random Data
00:13:00.316 Copied LBAs from 0 - 63 to the Destination LBA 256
00:13:00.316 LBAs matching Written Data: 64
00:13:00.575
00:13:00.575 real 0m0.315s
00:13:00.575 user 0m0.124s
00:13:00.575 sys 0m0.088s
00:13:00.575 15:38:21 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:00.575 15:38:21 -- common/autotest_common.sh@10 -- # set +x
00:13:00.575 ************************************
00:13:00.575 END TEST nvme_simple_copy
00:13:00.575 ************************************
00:13:00.575
00:13:00.575 real 0m8.084s
00:13:00.575 user 0m1.333s
00:13:00.575 sys 0m1.757s
00:13:00.575 15:38:21 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:00.575 15:38:21 -- common/autotest_common.sh@10 -- # set +x
00:13:00.575 ************************************
00:13:00.575 END TEST nvme_scc
00:13:00.575 ************************************
00:13:00.575 15:38:22 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]]
00:13:00.575 15:38:22 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]]
00:13:00.575 15:38:22 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]]
00:13:00.575 15:38:22 -- spdk/autotest.sh@238 -- # [[ 1 -eq 1 ]]
00:13:00.575 15:38:22 -- spdk/autotest.sh@239 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:13:00.575 15:38:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:13:00.575 15:38:22 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:00.575 15:38:22 -- common/autotest_common.sh@10 -- # set +x
00:13:00.575 ************************************
00:13:00.575 START TEST nvme_fdp
00:13:00.575 ************************************
00:13:00.575 15:38:22 -- common/autotest_common.sh@1104 -- # test/nvme/nvme_fdp.sh
00:13:00.575 * Looking for test storage...
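
The trace above is the feature-selection helper in test/common/nvme/functions.sh at work: get_ctrl_with_feature scc walks every controller cached by scan_nvme_ctrls, calls ctrl_has_scc on each, and keeps the ones whose ONCS word has bit 8 (the NVMe Copy command) set. All four QEMU controllers report oncs=0x15d here, so the first hit, nvme1 at 0000:00:08.0, is handed to the simple-copy test. Below is a condensed, self-contained sketch of that pattern, seeded with values from this log; it is not a quote of functions.sh, and ctrl_has_fdp with its CTRATT bit-19 test is an inference from the ctratt values recorded later in this run (0x88010 for nvme0 vs 0x8000 for nvme1), not confirmed source.

#!/usr/bin/env bash
# Sketch only: scan_nvme_ctrls has already cached each controller's id-ctrl
# fields in a bash associative array named after the device node.
declare -A nvme0=([oncs]=0x15d [ctratt]=0x88010)   # values taken from this log
declare -A nvme1=([oncs]=0x15d [ctratt]=0x8000)
declare -A ctrls=([nvme0]=nvme0 [nvme1]=nvme1)

get_nvme_ctrl_feature() {          # echo one cached id-ctrl field, e.g. oncs
    local ctrl=$1 reg=${2:-oncs}
    local -n _ctrl=$ctrl           # nameref into the cached array
    [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
}

ctrl_has_scc() {                   # ONCS bit 8 = Copy command supported
    local oncs
    oncs=$(get_nvme_ctrl_feature "$1" oncs) && (( oncs & 1 << 8 ))
}

ctrl_has_fdp() {                   # assumption: CTRATT bit 19 flags FDP support
    local ctratt
    ctratt=$(get_nvme_ctrl_feature "$1" ctratt) && (( ctratt & 1 << 19 ))
}

for ctrl in "${!ctrls[@]}"; do
    ctrl_has_scc "$ctrl" && echo "$ctrl supports SCC"
    ctrl_has_fdp "$ctrl" && echo "$ctrl supports FDP"
done

Run against these values it reports both controllers for SCC but only nvme0 for FDP, which is consistent with nvme0 being the one controller in this run that advertises subnqn nqn.2019-08.org.qemu:fdp-subsys3.
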
00:13:00.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:00.575 15:38:22 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:00.575 15:38:22 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:00.575 15:38:22 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:00.575 15:38:22 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:00.575 15:38:22 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:00.575 15:38:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.575 15:38:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.575 15:38:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.575 15:38:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.575 15:38:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.575 15:38:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.575 15:38:22 -- paths/export.sh@5 -- # export PATH 00:13:00.575 15:38:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.575 15:38:22 -- nvme/functions.sh@10 -- # ctrls=() 00:13:00.575 15:38:22 -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:00.575 15:38:22 -- nvme/functions.sh@11 -- # nvmes=() 00:13:00.575 15:38:22 -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:00.575 15:38:22 -- nvme/functions.sh@12 -- # bdfs=() 00:13:00.575 15:38:22 -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:00.575 15:38:22 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:00.575 15:38:22 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:00.575 15:38:22 -- nvme/functions.sh@14 -- # nvme_name= 00:13:00.575 15:38:22 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:00.575 15:38:22 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:01.141 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:13:01.141 Waiting for block devices as requested
00:13:01.399 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme
00:13:01.399 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme
00:13:01.399 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:13:01.399 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme
00:13:06.721 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing
00:13:06.721 15:38:27 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:13:06.721 15:38:27 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:13:06.721 15:38:27 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:13:06.721 15:38:27 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:13:06.721 15:38:27 -- nvme/functions.sh@49 -- # pci=0000:00:09.0
00:13:06.721 15:38:27 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0
00:13:06.721 15:38:27 -- scripts/common.sh@15 -- # local i
00:13:06.721 15:38:27 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]]
00:13:06.721 15:38:27 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:13:06.721 15:38:27 -- scripts/common.sh@24 -- # return 0
00:13:06.721 15:38:27 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:13:06.721 15:38:28 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:13:06.721 15:38:28 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:13:06.721 15:38:28 -- nvme/functions.sh@18 -- # shift
00:13:06.721 15:38:28 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=:
00:13:06.721 15:38:28 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val
00:13:06.721 15:38:28 -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=:
00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val
00:13:06.721 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=:
00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val
00:13:06.721 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=:
00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val
00:13:06.721 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 12343 ]]
00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "'
00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 '
00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=:
00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val
00:13:06.721 15:38:28 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]]
00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "'
00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl '
00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=:
00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val
00:13:06.721 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]]
00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "'
00:13:06.721 15:38:28 -- nvme/functions.sh@23 --
# nvme0[fr]='8.0.0 ' 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.721 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.721 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.721 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.721 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.721 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.721 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.721 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.721 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.721 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.721 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.721 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.721 
15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:06.721 15:38:28 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:06.721 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:06.722 15:38:28 -- nvme/functions.sh@21 
-- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # 
nvme0[hmpre]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 
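
Every IFS=: / read -r reg val / eval triple in these dumps is one iteration of the nvme_get loop invoked at functions.sh@52 above: each "field : value" line printed by /usr/local/src/nvme-cli/nvme id-ctrl is split on its first colon and stored into a global associative array named after the controller, so the identify data is fetched exactly once per device. A minimal stand-alone sketch of that loop follows; it is simplified from test/common/nvme/functions.sh (the real helper fills the per-namespace arrays such as nvme3n1 through the same code path), and the invocation at the end mirrors the one traced in this log.

#!/usr/bin/env bash
# Sketch of the nvme_get parsing loop, assuming nvme-cli is installed.
nvme_get() {
    local ref=$1 reg val
    shift                          # remaining args: id-ctrl /dev/nvme0
    local -gA "$ref=()"            # declare the global array, as @20 does

    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}   # field names arrive padded: 'oncs      '
        [[ -n $val ]] || continue  # skip the banner line and blanks
        val=${val# }               # drop the space after ':', keep field padding
        eval "${ref}[$reg]=\"\$val\""   # -> the eval 'nvme0[...]="..."' lines
    done < <(nvme "$@")
}

nvme_get nvme0 id-ctrl /dev/nvme0  # as at functions.sh@52 in this run
echo "${nvme0[oncs]}"              # would print 0x15d on these QEMU controllers

Caching the fields this way is what lets the later helpers (get_oncs, ctrl_has_scc, and friends) run as pure bash lookups instead of re-invoking nvme-cli for every probe.
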
00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.722 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:06.722 15:38:28 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.722 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 
-- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:06.723 
15:38:28 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.723 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:06.723 15:38:28 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.723 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:06.724 15:38:28 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:06.724 15:38:28 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:06.724 15:38:28 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:13:06.724 15:38:28 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:06.724 15:38:28 -- nvme/functions.sh@47 -- # for ctrl in 
/sys/class/nvme/nvme* 00:13:06.724 15:38:28 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:13:06.724 15:38:28 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:13:06.724 15:38:28 -- scripts/common.sh@15 -- # local i 00:13:06.724 15:38:28 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:13:06.724 15:38:28 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:06.724 15:38:28 -- scripts/common.sh@24 -- # return 0 00:13:06.724 15:38:28 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:06.724 15:38:28 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:06.724 15:38:28 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@18 -- # shift 00:13:06.724 15:38:28 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:06.724 15:38:28 -- 
nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:06.724 15:38:28 -- 
nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.724 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:06.724 15:38:28 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.724 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:06.725 15:38:28 
-- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.725 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.725 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:06.725 15:38:28 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:06.726 15:38:28 -- 
nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
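The trace above is nvme/functions.sh populating the nvme1 associative array: each line of id-ctrl output is split on the first ':' into reg/val, non-empty values are assigned via eval, and IFS is reset before every read. A minimal sketch of that loop, assembled from the statements visible in the trace (functions.sh@16-23); the key/whitespace cleanup and the bare "nvme" binary name are assumptions, the rest is as traced:

nvme_get() {
    local ref=$1 reg val
    shift                                    # remaining args: id-ctrl /dev/nvme1 (@18)
    local -gA "$ref=()"                      # e.g. nvme1=() (@20)
    while IFS=: read -r reg val; do          # split on the first ':' only (@21)
        [[ -n $val ]] || continue            # skip banner/blank lines (@22)
        reg=${reg//[[:space:]]/}             # "ps    0" -> ps0 (assumed cleanup)
        val=${val#"${val%%[![:space:]]*}"}   # left-trim the value (assumed)
        eval "${ref}[$reg]=\"$val\""         # nvme1[sqes]=0x66 etc. (@23)
    done < <(nvme "$@")                      # trace runs /usr/local/src/nvme-cli/nvme (@16)
}

Values that themselves contain ':' (subnqn, the ps0 power-state line) survive intact because read assigns everything after the first separator to val.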
00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg 
val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.726 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:06.726 15:38:28 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.726 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # 
nvme1[icdoff]=0 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:06.727 15:38:28 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:06.727 15:38:28 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:06.727 15:38:28 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:06.727 15:38:28 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@18 -- # shift 00:13:06.727 15:38:28 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 
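At functions.sh@53-58 the trace switches from the controller to its namespaces: a nameref _ctrl_ns is pointed at nvme1_ns, each /sys/class/nvme/nvme1/nvme1nX entry is probed, and nvme_get is rerun with id-ns against the matching /dev node. Condensed into a standalone helper; the wrapper function name and the declare are illustrative only, the loop body follows the traced statements:

walk_ctrl_namespaces() {
    local ctrl=$1 ns ns_dev                      # e.g. /sys/class/nvme/nvme1
    local -n _ctrl_ns=${ctrl##*/}_ns             # nameref onto nvme1_ns (@53)
    for ns in "$ctrl/${ctrl##*/}n"*; do          # nvme1n1 nvme1n2 nvme1n3 (@54)
        [[ -e $ns ]] || continue                 # (@55)
        ns_dev=${ns##*/}                         # nvme1n1 (@56)
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # fills nvme1n1[nsze] etc. (@57)
        _ctrl_ns[${ns##*n}]=$ns_dev              # keyed by namespace number (@58)
    done
}

declare -A nvme1_ns                              # illustrative; the real script declares these elsewhere
walk_ctrl_namespaces /sys/class/nvme/nvme1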
00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.727 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:06.727 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.727 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:06.728 15:38:28 -- 
nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nvmsetid]="0"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 
lbads:12 rp:0 ' 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:06.728 15:38:28 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:06.728 15:38:28 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:13:06.728 15:38:28 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:13:06.728 15:38:28 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@18 -- # shift 00:13:06.728 15:38:28 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:13:06.728 
15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.728 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.728 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:13:06.728 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 
'nvme1n2[nacwu]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:06.729 15:38:28 
-- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.729 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:06.729 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.729 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:13:06.730 15:38:28 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:06.730 15:38:28 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:13:06.730 15:38:28 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:13:06.730 15:38:28 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@18 -- # shift 
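Each namespace dump ends with eight lbaf descriptors. Per the NVMe spec, bits 3:0 of flbas select the active format and lbads is log2 of the LBA data size, so the traced flbas=0x4 with lbaf4 "ms:0 lbads:12 rp:0 (in use)" means 4096-byte blocks with no metadata. A quick check of the arithmetic, using the values copied from the trace above:

flbas=0x4 lbads=12 ms=0
printf 'active format lbaf%d: %d B blocks, %d B metadata\n' \
    $((flbas & 0xf)) $((1 << lbads)) "$ms"
# -> active format lbaf4: 4096 B blocks, 0 B metadata

With nsze=0x100000 (1,048,576 blocks) that puts each of these QEMU-backed namespaces at 2^20 * 2^12 B = 4 GiB.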
00:13:06.730 15:38:28 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 
15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:13:06.730 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.730 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.730 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # 
IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:13:06.731 15:38:28 
-- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:13:06.731 15:38:28 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:06.731 15:38:28 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:06.731 15:38:28 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:13:06.731 15:38:28 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:06.731 15:38:28 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:06.731 15:38:28 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:13:06.731 15:38:28 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:13:06.731 15:38:28 -- scripts/common.sh@15 -- # local i 00:13:06.731 15:38:28 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:13:06.731 15:38:28 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:06.731 15:38:28 -- scripts/common.sh@24 -- # return 0 00:13:06.731 15:38:28 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:06.731 15:38:28 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:06.731 15:38:28 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@18 -- # shift 00:13:06.731 15:38:28 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # 
IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.731 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:13:06.731 15:38:28 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:13:06.731 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 
'nvme2[rtd3r]="0"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 
00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:06.732 
15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.732 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:06.732 15:38:28 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.732 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:06.733 
15:38:28 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.733 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.733 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.733 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 
15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 
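Once the per-namespace id-ns pass that follows completes (nsze, flbas, the lbafN descriptors, and the "(in use)" marker), the captured fields are enough to recover the namespace geometry: FLBAS bits 3:0 select the active LBA format, and that format's lbads value is log2 of the logical block size. A small worked sketch against the values the trace records below for nvme2n1 (flbas=0x7, lbaf7="ms:64 lbads:12 rp:0 (in use)"):

    # Sketch only: derive the logical block size from the captured fields.
    declare -A ns=( [flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)' )
    fmt=$(( ns[flbas] & 0xf ))            # FLBAS bits 3:0 = active format index
    lbads=${ns[lbaf$fmt]#*lbads:}         # -> "12 rp:0 (in use)"
    lbads=${lbads%% *}                    # -> "12"
    echo "logical block size: $(( 1 << lbads )) bytes"   # 2^12 = 4096

This is why the suite treats nvme2n1 as a 4096-byte-block namespace even though 512-byte formats (lbads:9) are also advertised: only the lbaf marked "(in use)" counts.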
00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:06.734 15:38:28 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:06.734 15:38:28 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:06.734 15:38:28 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:06.734 15:38:28 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@18 -- # shift 00:13:06.734 15:38:28 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 
-- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:06.734 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.734 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.734 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # 
IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:06.735 15:38:28 -- 
nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 
lbads:9 rp:0 ' 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:06.735 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.735 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.735 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:06.736 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:06.736 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:06.736 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.736 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.736 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:06.736 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:06.736 15:38:28 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:06.736 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.736 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.736 15:38:28 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:06.736 15:38:28 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:06.736 15:38:28 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:06.736 15:38:28 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:13:06.736 15:38:28 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:06.736 15:38:28 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:06.736 15:38:28 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:06.736 15:38:28 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:13:06.736 15:38:28 -- 
nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:13:06.736 15:38:28 -- scripts/common.sh@15 -- # local i 00:13:06.736 15:38:28 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:13:06.736 15:38:28 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:06.736 15:38:28 -- scripts/common.sh@24 -- # return 0 00:13:06.736 15:38:28 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:06.736 15:38:28 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:06.736 15:38:28 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:06.736 15:38:28 -- nvme/functions.sh@18 -- # shift 00:13:06.736 15:38:28 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:06.736 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.736 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.736 15:38:28 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:06.736 15:38:28 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:06.736 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.736 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.736 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:06.736 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:06.736 15:38:28 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:06.736 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.996 15:38:28 -- 
nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:06.996 
15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:06.996 15:38:28 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.996 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.996 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:06.997 15:38:28 -- 
nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- 
nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.997 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.997 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.997 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 
00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # 
nvme3[nwpc]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:06.998 15:38:28 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.998 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.998 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:06.999 15:38:28 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:06.999 15:38:28 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:13:06.999 15:38:28 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:13:06.999 15:38:28 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@18 -- # shift 00:13:06.999 15:38:28 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 
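For reference, the nvme_get loop being traced boils down to reading "name : value" pairs from nvme-cli and storing them in an associative array. A self-contained sketch of that parse, assuming the plain-text id-ctrl output format seen in this run:

    #!/usr/bin/env bash
    # Parse "field : value" lines from nvme id-ctrl into an associative array.
    declare -A ctrl
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        reg=${reg// /}        # field names are padded, e.g. "vid       "
        ctrl[$reg]=${val# }   # drop the single space after the colon
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
    echo "vid=${ctrl[vid]} sn=${ctrl[sn]} subnqn=${ctrl[subnqn]}"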
00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # 
nvme3n1[fpi]=0 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.999 15:38:28 -- 
nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.999 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:13:06.999 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:13:06.999 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:07.000 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:07.000 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:07.000 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:07.000 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:07.000 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:07.000 15:38:28 -- nvme/functions.sh@21 
-- # read -r reg val 00:13:07.000 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:07.000 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:07.000 15:38:28 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:07.000 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:07.000 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:07.000 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:07.000 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:07.000 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:07.000 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:07.000 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:16 
lbads:12 rp:0 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:07.000 15:38:28 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:07.000 15:38:28 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # IFS=: 00:13:07.000 15:38:28 -- nvme/functions.sh@21 -- # read -r reg val 00:13:07.000 15:38:28 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:13:07.000 15:38:28 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:07.000 15:38:28 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:07.000 15:38:28 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:13:07.000 15:38:28 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:07.000 15:38:28 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:07.000 15:38:28 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:07.000 15:38:28 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:13:07.000 15:38:28 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:07.000 15:38:28 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:13:07.000 15:38:28 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:13:07.000 15:38:28 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:13:07.000 15:38:28 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:13:07.000 15:38:28 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:07.000 15:38:28 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:13:07.000 15:38:28 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:13:07.000 15:38:28 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:13:07.000 15:38:28 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:13:07.000 15:38:28 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:07.000 15:38:28 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:07.000 15:38:28 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:07.000 15:38:28 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@76 -- # echo 0x8000 00:13:07.000 15:38:28 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:07.000 15:38:28 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:07.000 15:38:28 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:07.000 15:38:28 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:13:07.000 15:38:28 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:13:07.000 15:38:28 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:13:07.000 15:38:28 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:13:07.000 15:38:28 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:07.000 15:38:28 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:07.000 15:38:28 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:07.000 15:38:28 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@76 -- # echo 0x88010 
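The capability probe running here reduces to one bitmask test: CTRATT bit 19 (0x80000) advertises Flexible Data Placement, so the 0x88010 reported by nvme0 passes while the 0x8000 controllers do not. A standalone sketch of that check:

    #!/usr/bin/env bash
    # CTRATT bit 19 = Flexible Data Placement supported.
    ctrl_has_fdp() {
        local ctratt=$1
        (( ctratt & 1 << 19 ))
    }
    ctrl_has_fdp 0x88010 && echo "FDP supported"   # nvme0 in this run
    ctrl_has_fdp 0x8000  || echo "no FDP"          # nvme1/nvme2/nvme3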
00:13:07.000 15:38:28 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:13:07.000 15:38:28 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:07.000 15:38:28 -- nvme/functions.sh@197 -- # echo nvme0 00:13:07.000 15:38:28 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:07.000 15:38:28 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:13:07.000 15:38:28 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:13:07.000 15:38:28 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:13:07.000 15:38:28 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:13:07.000 15:38:28 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:07.000 15:38:28 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:07.000 15:38:28 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:07.000 15:38:28 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@76 -- # echo 0x8000 00:13:07.000 15:38:28 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:07.000 15:38:28 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:07.000 15:38:28 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:07.000 15:38:28 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:13:07.000 15:38:28 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:13:07.000 15:38:28 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:13:07.000 15:38:28 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:13:07.000 15:38:28 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:07.000 15:38:28 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:07.000 15:38:28 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:07.000 15:38:28 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:07.000 15:38:28 -- nvme/functions.sh@76 -- # echo 0x8000 00:13:07.000 15:38:28 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:07.000 15:38:28 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:07.000 15:38:28 -- nvme/functions.sh@204 -- # trap - ERR 00:13:07.000 15:38:28 -- nvme/functions.sh@204 -- # print_backtrace 00:13:07.001 15:38:28 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:13:07.001 15:38:28 -- common/autotest_common.sh@1132 -- # return 0 00:13:07.001 15:38:28 -- nvme/functions.sh@204 -- # trap - ERR 00:13:07.001 15:38:28 -- nvme/functions.sh@204 -- # print_backtrace 00:13:07.001 15:38:28 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:13:07.001 15:38:28 -- common/autotest_common.sh@1132 -- # return 0 00:13:07.001 15:38:28 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:13:07.001 15:38:28 -- nvme/functions.sh@206 -- # echo nvme0 00:13:07.001 15:38:28 -- nvme/functions.sh@207 -- # return 0 00:13:07.001 15:38:28 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:13:07.001 15:38:28 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:13:07.001 15:38:28 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:07.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:07.934 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:07.934 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:13:07.934 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:13:08.192 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:08.192 15:38:29 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement 
/home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:13:08.192 15:38:29 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:08.192 15:38:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:08.192 15:38:29 -- common/autotest_common.sh@10 -- # set +x 00:13:08.192 ************************************ 00:13:08.192 START TEST nvme_flexible_data_placement 00:13:08.192 ************************************ 00:13:08.192 15:38:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:13:08.457 Initializing NVMe Controllers 00:13:08.457 Attaching to 0000:00:09.0 00:13:08.457 Controller supports FDP Attached to 0000:00:09.0 00:13:08.457 Namespace ID: 1 Endurance Group ID: 1 00:13:08.457 Initialization complete. 00:13:08.457 00:13:08.457 ================================== 00:13:08.457 == FDP tests for Namespace: #01 == 00:13:08.457 ================================== 00:13:08.457 00:13:08.457 Get Feature: FDP: 00:13:08.457 ================= 00:13:08.457 Enabled: Yes 00:13:08.457 FDP configuration Index: 0 00:13:08.457 00:13:08.457 FDP configurations log page 00:13:08.457 =========================== 00:13:08.457 Number of FDP configurations: 1 00:13:08.457 Version: 0 00:13:08.457 Size: 112 00:13:08.457 FDP Configuration Descriptor: 0 00:13:08.457 Descriptor Size: 96 00:13:08.457 Reclaim Group Identifier format: 2 00:13:08.457 FDP Volatile Write Cache: Not Present 00:13:08.457 FDP Configuration: Valid 00:13:08.457 Vendor Specific Size: 0 00:13:08.457 Number of Reclaim Groups: 2 00:13:08.457 Number of Reclaim Unit Handles: 8 00:13:08.457 Max Placement Identifiers: 128 00:13:08.457 Number of Namespaces Supported: 256 00:13:08.457 Reclaim Unit Nominal Size: 6000000 bytes 00:13:08.457 Estimated Reclaim Unit Time Limit: Not Reported 00:13:08.457 RUH Desc #000: RUH Type: Initially Isolated 00:13:08.457 RUH Desc #001: RUH Type: Initially Isolated 00:13:08.457 RUH Desc #002: RUH Type: Initially Isolated 00:13:08.457 RUH Desc #003: RUH Type: Initially Isolated 00:13:08.457 RUH Desc #004: RUH Type: Initially Isolated 00:13:08.457 RUH Desc #005: RUH Type: Initially Isolated 00:13:08.457 RUH Desc #006: RUH Type: Initially Isolated 00:13:08.457 RUH Desc #007: RUH Type: Initially Isolated 00:13:08.457 00:13:08.457 FDP reclaim unit handle usage log page 00:13:08.457 ====================================== 00:13:08.457 Number of Reclaim Unit Handles: 8 00:13:08.457 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:08.457 RUH Usage Desc #001: RUH Attributes: Unused 00:13:08.457 RUH Usage Desc #002: RUH Attributes: Unused 00:13:08.457 RUH Usage Desc #003: RUH Attributes: Unused 00:13:08.457 RUH Usage Desc #004: RUH Attributes: Unused 00:13:08.457 RUH Usage Desc #005: RUH Attributes: Unused 00:13:08.457 RUH Usage Desc #006: RUH Attributes: Unused 00:13:08.458 RUH Usage Desc #007: RUH Attributes: Unused 00:13:08.458 00:13:08.458 FDP statistics log page 00:13:08.458 ======================= 00:13:08.458 Host bytes with metadata written: 768290816 00:13:08.458 Media bytes with metadata written: 768569344 00:13:08.458 Media bytes erased: 0 00:13:08.458 00:13:08.458 FDP Reclaim unit handle status 00:13:08.458 ============================== 00:13:08.458 Number of RUHS descriptors: 2 00:13:08.458 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000234d 00:13:08.458 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:13:08.458
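RUAMW in the descriptors above is the Reclaim Unit Available Media Writes count, expressed in logical blocks; with the 4 KiB format in use on this namespace (lbads:12), it can be turned into remaining bytes with shell arithmetic. A small sketch, assuming that block size:

    #!/usr/bin/env bash
    # Decode the hex RUAMW values reported above into blocks and bytes.
    for ruamw in 0x234d 0x6000; do
        printf 'RUAMW %s = %d blocks = %d bytes at 4 KiB\n' \
            "$ruamw" "$ruamw" $(( ruamw * 4096 ))
    done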
00:13:08.458 FDP write on placement id: 0 success 00:13:08.458 00:13:08.458 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:13:08.458 00:13:08.458 IO mgmt send: RUH update for Placement ID: #0 Success 00:13:08.458 00:13:08.458 Get Feature: FDP Events for Placement handle: #0 00:13:08.458 ======================== 00:13:08.458 Number of FDP Events: 6 00:13:08.458 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:13:08.458 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:13:08.458 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:13:08.458 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:13:08.458 FDP Event: #4 Type: Media Reallocated Enabled: No 00:13:08.458 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:13:08.458 00:13:08.458 FDP events log page 00:13:08.458 =================== 00:13:08.458 Number of FDP events: 1 00:13:08.458 FDP Event #0: 00:13:08.458 Event Type: RU Not Written to Capacity 00:13:08.458 Placement Identifier: Valid 00:13:08.458 NSID: Valid 00:13:08.458 Location: Valid 00:13:08.458 Placement Identifier: 0 00:13:08.458 Event Timestamp: c 00:13:08.458 Namespace Identifier: 1 00:13:08.458 Reclaim Group Identifier: 0 00:13:08.458 Reclaim Unit Handle Identifier: 0 00:13:08.458 00:13:08.458 FDP test passed 00:13:08.458 00:13:08.458 real 0m0.280s 00:13:08.458 user 0m0.094s 00:13:08.458 sys 0m0.084s 00:13:08.458 15:38:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.458 15:38:29 -- common/autotest_common.sh@10 -- # set +x 00:13:08.458 ************************************ 00:13:08.458 END TEST nvme_flexible_data_placement 00:13:08.458 ************************************ 00:13:08.458 00:13:08.458 real 0m7.945s 00:13:08.458 user 0m1.288s 00:13:08.458 sys 0m1.713s 00:13:08.458 15:38:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.458 15:38:29 -- common/autotest_common.sh@10 -- # set +x 00:13:08.458 ************************************ 00:13:08.458 END TEST nvme_fdp 00:13:08.458 ************************************ 00:13:08.458 15:38:29 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:13:08.458 15:38:29 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:08.458 15:38:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:08.458 15:38:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:08.458 15:38:29 -- common/autotest_common.sh@10 -- # set +x 00:13:08.458 ************************************ 00:13:08.458 START TEST nvme_rpc 00:13:08.458 ************************************ 00:13:08.458 15:38:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:08.715 * Looking for test storage... 
00:13:08.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:08.715 15:38:30 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:08.715 15:38:30 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:08.715 15:38:30 -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:08.715 15:38:30 -- common/autotest_common.sh@1509 -- # local bdfs 00:13:08.715 15:38:30 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:08.715 15:38:30 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:08.715 15:38:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:08.715 15:38:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:13:08.715 15:38:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:08.715 15:38:30 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:08.715 15:38:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:08.715 15:38:30 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:08.715 15:38:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:13:08.715 15:38:30 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:13:08.715 15:38:30 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:13:08.715 15:38:30 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67200 00:13:08.715 15:38:30 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:08.715 15:38:30 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:08.715 15:38:30 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67200 00:13:08.715 15:38:30 -- common/autotest_common.sh@819 -- # '[' -z 67200 ']' 00:13:08.715 15:38:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.715 15:38:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:08.715 15:38:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.715 15:38:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:08.715 15:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:08.715 [2024-07-24 15:38:30.269406] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
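The bdf discovery traced above comes from gen_nvme.sh, which emits a bdev_nvme attach config; jq then pulls every traddr out of it. The same enumeration as a standalone sketch, using this run's paths:

    #!/usr/bin/env bash
    rootdir=/home/vagrant/spdk_repo/spdk
    # Each .config[].params.traddr in gen_nvme.sh output is a PCI BDF.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    echo "first bdf: ${bdfs[0]}"    # 0000:00:06.0 in this run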
00:13:08.715 [2024-07-24 15:38:30.269543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67200 ] 00:13:08.973 [2024-07-24 15:38:30.428833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:09.232 [2024-07-24 15:38:30.611787] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:09.232 [2024-07-24 15:38:30.612176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.232 [2024-07-24 15:38:30.612205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.605 15:38:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:10.606 15:38:31 -- common/autotest_common.sh@852 -- # return 0 00:13:10.606 15:38:31 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:13:10.606 Nvme0n1 00:13:10.606 15:38:32 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:10.606 15:38:32 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:10.863 request: 00:13:10.863 { 00:13:10.863 "filename": "non_existing_file", 00:13:10.863 "bdev_name": "Nvme0n1", 00:13:10.863 "method": "bdev_nvme_apply_firmware", 00:13:10.863 "req_id": 1 00:13:10.863 } 00:13:10.863 Got JSON-RPC error response 00:13:10.863 response: 00:13:10.863 { 00:13:10.863 "code": -32603, 00:13:10.863 "message": "open file failed." 00:13:10.863 } 00:13:10.863 15:38:32 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:10.863 15:38:32 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:10.863 15:38:32 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:11.121 15:38:32 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:11.121 15:38:32 -- nvme/nvme_rpc.sh@40 -- # killprocess 67200 00:13:11.121 15:38:32 -- common/autotest_common.sh@926 -- # '[' -z 67200 ']' 00:13:11.121 15:38:32 -- common/autotest_common.sh@930 -- # kill -0 67200 00:13:11.121 15:38:32 -- common/autotest_common.sh@931 -- # uname 00:13:11.121 15:38:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:11.121 15:38:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67200 00:13:11.121 15:38:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:11.121 15:38:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:11.121 killing process with pid 67200 00:13:11.121 15:38:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67200' 00:13:11.121 15:38:32 -- common/autotest_common.sh@945 -- # kill 67200 00:13:11.121 15:38:32 -- common/autotest_common.sh@950 -- # wait 67200 00:13:13.019 00:13:13.019 real 0m4.580s 00:13:13.019 user 0m8.860s 00:13:13.019 sys 0m0.574s 00:13:13.019 15:38:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.019 15:38:34 -- common/autotest_common.sh@10 -- # set +x 00:13:13.019 ************************************ 00:13:13.019 END TEST nvme_rpc 00:13:13.019 ************************************ 00:13:13.277 15:38:34 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:13.277 15:38:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:13.277 15:38:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 
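The apply_firmware call above is a deliberate negative test: a missing firmware file must surface as JSON-RPC error -32603 ("open file failed.") and a nonzero exit from rpc.py. A condensed sketch of that assertion, against a target started as in this run:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Expect failure: the file does not exist, so the RPC must return an error.
    if "$rpc" bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo 'expected bdev_nvme_apply_firmware to fail' >&2
        exit 1
    fi
    echo 'got the expected -32603 "open file failed." error'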
00:13:13.277 15:38:34 -- common/autotest_common.sh@10 -- # set +x 00:13:13.277 ************************************ 00:13:13.277 START TEST nvme_rpc_timeouts 00:13:13.277 ************************************ 00:13:13.277 15:38:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:13.277 * Looking for test storage... 00:13:13.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:13.277 15:38:34 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:13.277 15:38:34 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67278 00:13:13.277 15:38:34 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67278 00:13:13.277 15:38:34 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67301 00:13:13.277 15:38:34 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:13:13.277 15:38:34 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:13.277 15:38:34 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67301 00:13:13.277 15:38:34 -- common/autotest_common.sh@819 -- # '[' -z 67301 ']' 00:13:13.277 15:38:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.277 15:38:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:13.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.277 15:38:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.277 15:38:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:13.277 15:38:34 -- common/autotest_common.sh@10 -- # set +x 00:13:13.277 [2024-07-24 15:38:34.794638] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:13.277 [2024-07-24 15:38:34.794786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67301 ] 00:13:13.535 [2024-07-24 15:38:34.955448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:13.793 [2024-07-24 15:38:35.139639] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:13.793 [2024-07-24 15:38:35.140031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.793 [2024-07-24 15:38:35.140057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.165 15:38:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:15.165 15:38:36 -- common/autotest_common.sh@852 -- # return 0 00:13:15.165 Checking default timeout settings: 00:13:15.165 15:38:36 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:15.165 15:38:36 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:15.423 Making settings changes with rpc: 00:13:15.423 15:38:36 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:15.423 15:38:36 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:15.680 Check default vs. 
modified settings: 00:13:15.680 15:38:37 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:13:15.680 15:38:37 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67278 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67278 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:15.938 Setting action_on_timeout is changed as expected. 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67278 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67278 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:15.938 Setting timeout_us is changed as expected. 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67278 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67278 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:13:15.938 Setting timeout_admin_us is changed as expected. 
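Each of the three blocks above is one pass of the same check: pull a field out of the default and modified save_config snapshots, strip everything but alphanumerics, and require that the value actually changed. One iteration, condensed into a standalone sketch (snapshot paths as in this run; the extract helper is just shorthand for the grep|awk|sed pipeline traced above, and the real loop covers action_on_timeout, timeout_us and timeout_admin_us):

    setting=timeout_us
    extract() { grep "$setting" "$1" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'; }
    before=$(extract /tmp/settings_default_67278)    # 0
    after=$(extract /tmp/settings_modified_67278)    # 12000000
    [ "$before" = "$after" ] && exit 1               # an unchanged value would fail the test
    echo "Setting $setting is changed as expected."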
00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67278 /tmp/settings_modified_67278 00:13:15.938 15:38:37 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67301 00:13:15.938 15:38:37 -- common/autotest_common.sh@926 -- # '[' -z 67301 ']' 00:13:15.938 15:38:37 -- common/autotest_common.sh@930 -- # kill -0 67301 00:13:15.938 15:38:37 -- common/autotest_common.sh@931 -- # uname 00:13:15.938 15:38:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:15.938 15:38:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67301 00:13:15.938 15:38:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:15.938 killing process with pid 67301 00:13:15.938 15:38:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:15.938 15:38:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67301' 00:13:15.938 15:38:37 -- common/autotest_common.sh@945 -- # kill 67301 00:13:15.938 15:38:37 -- common/autotest_common.sh@950 -- # wait 67301 00:13:18.466 RPC TIMEOUT SETTING TEST PASSED. 00:13:18.466 15:38:39 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:13:18.466 00:13:18.466 real 0m4.918s 00:13:18.466 user 0m9.783s 00:13:18.466 sys 0m0.549s 00:13:18.466 15:38:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.466 15:38:39 -- common/autotest_common.sh@10 -- # set +x 00:13:18.466 ************************************ 00:13:18.466 END TEST nvme_rpc_timeouts 00:13:18.466 ************************************ 00:13:18.466 15:38:39 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:13:18.466 15:38:39 -- spdk/autotest.sh@255 -- # [[ 1 -eq 1 ]] 00:13:18.466 15:38:39 -- spdk/autotest.sh@256 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:18.467 15:38:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:18.467 15:38:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:18.467 15:38:39 -- common/autotest_common.sh@10 -- # set +x 00:13:18.467 ************************************ 00:13:18.467 START TEST nvme_xnvme 00:13:18.467 ************************************ 00:13:18.467 15:38:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:18.467 * Looking for test storage... 
00:13:18.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:18.467 15:38:39 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:18.467 15:38:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.467 15:38:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.467 15:38:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.467 15:38:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.467 15:38:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.467 15:38:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.467 15:38:39 -- paths/export.sh@5 -- # export PATH 00:13:18.467 15:38:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:18.467 15:38:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:18.467 15:38:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:18.467 15:38:39 -- common/autotest_common.sh@10 -- # set +x 00:13:18.467 ************************************ 00:13:18.467 START TEST xnvme_to_malloc_dd_copy 00:13:18.467 ************************************ 00:13:18.467 15:38:39 -- common/autotest_common.sh@1104 -- # malloc_to_xnvme_copy 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:18.467 15:38:39 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:18.467 15:38:39 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:18.467 15:38:39 -- dd/common.sh@191 -- # return 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@18 -- # local io 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:18.467 
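All the fixture this copy test needs is a 1 GiB null_blk device plus the list of I/O mechanisms it will iterate over; the setup traced above comes down to:

    modprobe null_blk gb=1           # exposes /dev/nullb0; removed again when the test ends
    mbdev0=malloc0 mbdev0_bs=512     # counterpart malloc bdev: name and block size
    xnvme_io=(libaio io_uring)       # one spdk_dd copy pass per mechanism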
15:38:39 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:18.467 15:38:39 -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:18.467 15:38:39 -- dd/common.sh@31 -- # xtrace_disable 00:13:18.467 15:38:39 -- common/autotest_common.sh@10 -- # set +x 00:13:18.467 { 00:13:18.467 "subsystems": [ 00:13:18.467 { 00:13:18.467 "subsystem": "bdev", 00:13:18.467 "config": [ 00:13:18.467 { 00:13:18.467 "params": { 00:13:18.467 "block_size": 512, 00:13:18.467 "num_blocks": 2097152, 00:13:18.467 "name": "malloc0" 00:13:18.467 }, 00:13:18.467 "method": "bdev_malloc_create" 00:13:18.467 }, 00:13:18.467 { 00:13:18.467 "params": { 00:13:18.467 "io_mechanism": "libaio", 00:13:18.467 "filename": "/dev/nullb0", 00:13:18.467 "name": "null0" 00:13:18.467 }, 00:13:18.467 "method": "bdev_xnvme_create" 00:13:18.467 }, 00:13:18.467 { 00:13:18.467 "method": "bdev_wait_for_examine" 00:13:18.467 } 00:13:18.467 ] 00:13:18.467 } 00:13:18.467 ] 00:13:18.467 } 00:13:18.467 [2024-07-24 15:38:39.801019] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
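The --json /dev/fd/62 argument in the spdk_dd call above is bash process substitution: gen_conf (a helper in this test script) prints the two-bdev JSON configuration shown in the braces just above, and spdk_dd reads it as its subsystem config. Written out directly, the pattern is roughly:

    # copy 2097152 blocks of 512 bytes from the malloc bdev into the xnvme-backed null device
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)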
00:13:18.467 [2024-07-24 15:38:39.801193] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67443 ] 00:13:18.467 [2024-07-24 15:38:39.970429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.726 [2024-07-24 15:38:40.155644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.148  Copying: 176/1024 [MB] (176 MBps) Copying: 351/1024 [MB] (175 MBps) Copying: 524/1024 [MB] (172 MBps) Copying: 700/1024 [MB] (175 MBps) Copying: 875/1024 [MB] (175 MBps) Copying: 1024/1024 [MB] (average 174 MBps) 00:13:29.148 00:13:29.148 15:38:50 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:29.148 15:38:50 -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:29.148 15:38:50 -- dd/common.sh@31 -- # xtrace_disable 00:13:29.148 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:13:29.405 { 00:13:29.405 "subsystems": [ 00:13:29.405 { 00:13:29.405 "subsystem": "bdev", 00:13:29.405 "config": [ 00:13:29.405 { 00:13:29.405 "params": { 00:13:29.405 "block_size": 512, 00:13:29.405 "num_blocks": 2097152, 00:13:29.405 "name": "malloc0" 00:13:29.405 }, 00:13:29.405 "method": "bdev_malloc_create" 00:13:29.405 }, 00:13:29.405 { 00:13:29.405 "params": { 00:13:29.405 "io_mechanism": "libaio", 00:13:29.405 "filename": "/dev/nullb0", 00:13:29.405 "name": "null0" 00:13:29.405 }, 00:13:29.405 "method": "bdev_xnvme_create" 00:13:29.405 }, 00:13:29.405 { 00:13:29.405 "method": "bdev_wait_for_examine" 00:13:29.405 } 00:13:29.405 ] 00:13:29.405 } 00:13:29.405 ] 00:13:29.405 } 00:13:29.405 [2024-07-24 15:38:50.797612] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:13:29.405 [2024-07-24 15:38:50.797771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67574 ] 00:13:29.405 [2024-07-24 15:38:50.969368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.664 [2024-07-24 15:38:51.190208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.680  Copying: 181/1024 [MB] (181 MBps) Copying: 365/1024 [MB] (183 MBps) Copying: 549/1024 [MB] (184 MBps) Copying: 732/1024 [MB] (183 MBps) Copying: 916/1024 [MB] (183 MBps) Copying: 1024/1024 [MB] (average 183 MBps) 00:13:40.680 00:13:40.680 15:39:01 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:40.680 15:39:01 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:40.680 15:39:01 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:40.680 15:39:01 -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:40.680 15:39:01 -- dd/common.sh@31 -- # xtrace_disable 00:13:40.680 15:39:01 -- common/autotest_common.sh@10 -- # set +x 00:13:40.680 { 00:13:40.680 "subsystems": [ 00:13:40.680 { 00:13:40.680 "subsystem": "bdev", 00:13:40.680 "config": [ 00:13:40.680 { 00:13:40.680 "params": { 00:13:40.680 "block_size": 512, 00:13:40.680 "num_blocks": 2097152, 00:13:40.680 "name": "malloc0" 00:13:40.680 }, 00:13:40.680 "method": "bdev_malloc_create" 00:13:40.680 }, 00:13:40.680 { 00:13:40.680 "params": { 00:13:40.680 "io_mechanism": "io_uring", 00:13:40.680 "filename": "/dev/nullb0", 00:13:40.680 "name": "null0" 00:13:40.680 }, 00:13:40.680 "method": "bdev_xnvme_create" 00:13:40.680 }, 00:13:40.680 { 00:13:40.680 "method": "bdev_wait_for_examine" 00:13:40.680 } 00:13:40.680 ] 00:13:40.680 } 00:13:40.680 ] 00:13:40.680 } 00:13:40.680 [2024-07-24 15:39:01.559872] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:13:40.680 [2024-07-24 15:39:01.560024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67689 ] 00:13:40.680 [2024-07-24 15:39:01.720987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.680 [2024-07-24 15:39:01.953316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.611  Copying: 184/1024 [MB] (184 MBps) Copying: 363/1024 [MB] (179 MBps) Copying: 549/1024 [MB] (185 MBps) Copying: 735/1024 [MB] (185 MBps) Copying: 921/1024 [MB] (186 MBps) Copying: 1024/1024 [MB] (average 184 MBps) 00:13:50.611 00:13:50.611 15:39:12 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:50.611 15:39:12 -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:50.611 15:39:12 -- dd/common.sh@31 -- # xtrace_disable 00:13:50.611 15:39:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.611 { 00:13:50.611 "subsystems": [ 00:13:50.611 { 00:13:50.611 "subsystem": "bdev", 00:13:50.611 "config": [ 00:13:50.611 { 00:13:50.611 "params": { 00:13:50.611 "block_size": 512, 00:13:50.611 "num_blocks": 2097152, 00:13:50.611 "name": "malloc0" 00:13:50.611 }, 00:13:50.611 "method": "bdev_malloc_create" 00:13:50.611 }, 00:13:50.611 { 00:13:50.611 "params": { 00:13:50.611 "io_mechanism": "io_uring", 00:13:50.611 "filename": "/dev/nullb0", 00:13:50.611 "name": "null0" 00:13:50.611 }, 00:13:50.611 "method": "bdev_xnvme_create" 00:13:50.611 }, 00:13:50.611 { 00:13:50.611 "method": "bdev_wait_for_examine" 00:13:50.611 } 00:13:50.611 ] 00:13:50.611 } 00:13:50.611 ] 00:13:50.611 } 00:13:50.869 [2024-07-24 15:39:12.241300] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:13:50.869 [2024-07-24 15:39:12.241493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67815 ] 00:13:50.869 [2024-07-24 15:39:12.422016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.127 [2024-07-24 15:39:12.678976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.157  Copying: 180/1024 [MB] (180 MBps) Copying: 367/1024 [MB] (187 MBps) Copying: 554/1024 [MB] (186 MBps) Copying: 742/1024 [MB] (188 MBps) Copying: 930/1024 [MB] (188 MBps) Copying: 1024/1024 [MB] (average 186 MBps) 00:14:02.157 00:14:02.157 15:39:22 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:02.157 15:39:22 -- dd/common.sh@195 -- # modprobe -r null_blk 00:14:02.157 00:14:02.157 real 0m43.228s 00:14:02.157 user 0m37.867s 00:14:02.157 sys 0m4.753s 00:14:02.157 15:39:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.157 15:39:22 -- common/autotest_common.sh@10 -- # set +x 00:14:02.157 ************************************ 00:14:02.157 END TEST xnvme_to_malloc_dd_copy 00:14:02.157 ************************************ 00:14:02.157 15:39:22 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:02.157 15:39:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:02.157 15:39:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:02.157 15:39:22 -- common/autotest_common.sh@10 -- # set +x 00:14:02.157 ************************************ 00:14:02.157 START TEST xnvme_bdevperf 00:14:02.157 ************************************ 00:14:02.157 15:39:22 -- common/autotest_common.sh@1104 -- # xnvme_bdevperf 00:14:02.157 15:39:22 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:02.157 15:39:22 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:14:02.157 15:39:22 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:14:02.157 15:39:22 -- dd/common.sh@191 -- # return 00:14:02.157 15:39:22 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:02.157 15:39:22 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:02.157 15:39:22 -- xnvme/xnvme.sh@60 -- # local io 00:14:02.157 15:39:22 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:02.157 15:39:22 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:02.157 15:39:22 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:02.157 15:39:22 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:02.157 15:39:22 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:02.157 15:39:22 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:02.157 15:39:22 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:02.157 15:39:22 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:02.157 15:39:22 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:02.157 15:39:22 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:02.157 15:39:22 -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:02.157 15:39:22 -- dd/common.sh@31 -- # xtrace_disable 00:14:02.157 15:39:22 -- common/autotest_common.sh@10 -- # set +x 00:14:02.157 { 00:14:02.157 "subsystems": [ 00:14:02.157 { 00:14:02.157 "subsystem": "bdev", 00:14:02.157 "config": [ 00:14:02.157 { 00:14:02.157 "params": { 00:14:02.157 "io_mechanism": "libaio", 00:14:02.157 "filename": "/dev/nullb0", 
00:14:02.157 "name": "null0"
00:14:02.157 },
00:14:02.157 "method": "bdev_xnvme_create"
00:14:02.157 },
00:14:02.157 {
00:14:02.157 "method": "bdev_wait_for_examine"
00:14:02.157 }
00:14:02.157 ]
00:14:02.157 }
00:14:02.157 ]
00:14:02.157 }
00:14:02.157 [2024-07-24 15:39:23.061043] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:14:02.157 [2024-07-24 15:39:23.061200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67953 ]
00:14:02.157 [2024-07-24 15:39:23.224012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:02.157 [2024-07-24 15:39:23.407374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:02.157 Running I/O for 5 seconds...
00:14:07.448
00:14:07.448 Latency(us)
00:14:07.448 Device Information : runtime(s)       IOPS   MiB/s  Fail/s  TO/s  Average     min     max
00:14:07.448 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:14:07.448 null0              :       5.00  111649.85  436.13    0.00  0.00   569.85  171.29  975.59
00:14:07.448 ===================================================================================================================
00:14:07.448 Total              :             111649.85  436.13    0.00  0.00   569.85  171.29  975.59
00:14:08.382 15:39:29 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}"
00:14:08.382 15:39:29 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring
00:14:08.382 15:39:29 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096
00:14:08.382 15:39:29 -- xnvme/xnvme.sh@74 -- # gen_conf
00:14:08.382 15:39:29 -- dd/common.sh@31 -- # xtrace_disable
00:14:08.382 15:39:29 -- common/autotest_common.sh@10 -- # set +x
00:14:08.382 {
00:14:08.382 "subsystems": [
00:14:08.382 {
00:14:08.382 "subsystem": "bdev",
00:14:08.382 "config": [
00:14:08.382 {
00:14:08.382 "params": {
00:14:08.382 "io_mechanism": "io_uring",
00:14:08.382 "filename": "/dev/nullb0",
00:14:08.382 "name": "null0"
00:14:08.382 },
00:14:08.382 "method": "bdev_xnvme_create"
00:14:08.382 },
00:14:08.382 {
00:14:08.382 "method": "bdev_wait_for_examine"
00:14:08.382 }
00:14:08.382 ]
00:14:08.382 }
00:14:08.382 ]
00:14:08.382 }
00:14:08.382 [2024-07-24 15:39:29.905858] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:14:08.382 [2024-07-24 15:39:29.906046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68033 ]
00:14:08.641 [2024-07-24 15:39:30.076860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:08.899 [2024-07-24 15:39:30.281489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:09.157 Running I/O for 5 seconds...
00:14:14.472
00:14:14.472 Latency(us)
00:14:14.472 Device Information : runtime(s)       IOPS   MiB/s  Fail/s  TO/s  Average     min     max
00:14:14.472 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:14:14.472 null0              :       5.00  153703.32  600.40    0.00  0.00   413.23  260.65  606.95
00:14:14.472 ===================================================================================================================
00:14:14.472 Total              :             153703.32  600.40    0.00  0.00   413.23  260.65  606.95
00:14:15.406 15:39:36 -- xnvme/xnvme.sh@82 -- # remove_null_blk
00:14:15.406 15:39:36 -- dd/common.sh@195 -- # modprobe -r null_blk
00:14:15.406
00:14:15.406 real 0m13.747s
00:14:15.406 user 0m10.569s
00:14:15.406 sys 0m2.945s
00:14:15.406 15:39:36 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:15.406 ************************************
00:14:15.406 15:39:36 -- common/autotest_common.sh@10 -- # set +x
00:14:15.406 END TEST xnvme_bdevperf
00:14:15.406 ************************************
00:14:15.406
00:14:15.406 real 0m57.145s
00:14:15.406 user 0m48.503s
00:14:15.406 sys 0m7.797s
00:14:15.406 15:39:36 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:15.406 15:39:36 -- common/autotest_common.sh@10 -- # set +x
00:14:15.406 ************************************
00:14:15.406 END TEST nvme_xnvme
00:14:15.406 ************************************
00:14:15.406 15:39:36 -- spdk/autotest.sh@257 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:14:15.406 15:39:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:14:15.406 15:39:36 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:14:15.406 15:39:36 -- common/autotest_common.sh@10 -- # set +x
00:14:15.406 ************************************
00:14:15.406 START TEST blockdev_xnvme
00:14:15.406 ************************************
00:14:15.406 15:39:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:14:15.406 * Looking for test storage...
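The two bdevperf passes above differ only in io_mechanism, which is how the log arrives at roughly 111.6k IOPS for libaio against 153.7k IOPS for io_uring on the same null device. For reference, the traced invocation with its flags spelled out (queue depth 64, 4 KiB random reads for 5 seconds, exercising only the null0 bdev):

    # flags as traced: -q queue depth, -w workload, -t seconds, -T target bdev, -o I/O size in bytes
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_conf) -q 64 -w randread -t 5 -T null0 -o 4096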
00:14:15.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:15.406 15:39:36 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:15.406 15:39:36 -- bdev/nbd_common.sh@6 -- # set -e 00:14:15.406 15:39:36 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:15.406 15:39:36 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:15.406 15:39:36 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:15.406 15:39:36 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:15.406 15:39:36 -- bdev/blockdev.sh@18 -- # : 00:14:15.406 15:39:36 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:14:15.406 15:39:36 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:14:15.406 15:39:36 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:14:15.406 15:39:36 -- bdev/blockdev.sh@672 -- # uname -s 00:14:15.406 15:39:36 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:14:15.406 15:39:36 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:14:15.406 15:39:36 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:14:15.406 15:39:36 -- bdev/blockdev.sh@681 -- # crypto_device= 00:14:15.406 15:39:36 -- bdev/blockdev.sh@682 -- # dek= 00:14:15.406 15:39:36 -- bdev/blockdev.sh@683 -- # env_ctx= 00:14:15.406 15:39:36 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:14:15.406 15:39:36 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:14:15.406 15:39:36 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:14:15.406 15:39:36 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:14:15.406 15:39:36 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:14:15.406 15:39:36 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=68178 00:14:15.406 15:39:36 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:15.406 15:39:36 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:15.406 15:39:36 -- bdev/blockdev.sh@47 -- # waitforlisten 68178 00:14:15.406 15:39:36 -- common/autotest_common.sh@819 -- # '[' -z 68178 ']' 00:14:15.406 15:39:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.406 15:39:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:15.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.406 15:39:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.406 15:39:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:15.406 15:39:36 -- common/autotest_common.sh@10 -- # set +x 00:14:15.406 [2024-07-24 15:39:36.945653] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:14:15.406 [2024-07-24 15:39:36.945799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68178 ] 00:14:15.664 [2024-07-24 15:39:37.108763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.922 [2024-07-24 15:39:37.333182] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:15.922 [2024-07-24 15:39:37.333452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.298 15:39:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:17.298 15:39:38 -- common/autotest_common.sh@852 -- # return 0 00:14:17.298 15:39:38 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:14:17.298 15:39:38 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:14:17.298 15:39:38 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:14:17.298 15:39:38 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:14:17.298 15:39:38 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:17.557 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:17.557 Waiting for block devices as requested 00:14:17.816 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:14:17.816 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:14:17.816 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:14:18.074 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:14:23.352 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:14:23.352 15:39:44 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:14:23.352 15:39:44 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:14:23.352 15:39:44 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:14:23.352 15:39:44 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:14:23.352 15:39:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:23.352 15:39:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:14:23.352 15:39:44 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:14:23.352 15:39:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:14:23.352 15:39:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:23.352 15:39:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:23.352 15:39:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:14:23.352 15:39:44 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:14:23.352 15:39:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:23.352 15:39:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:23.352 15:39:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:23.352 15:39:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:14:23.352 15:39:44 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:14:23.352 15:39:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:23.352 15:39:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:23.352 15:39:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:23.352 15:39:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:14:23.352 15:39:44 -- common/autotest_common.sh@1647 -- # local 
device=nvme1n2 00:14:23.352 15:39:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:14:23.352 15:39:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:23.352 15:39:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:23.352 15:39:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:14:23.352 15:39:44 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:14:23.352 15:39:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:14:23.352 15:39:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:23.352 15:39:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:23.352 15:39:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:14:23.352 15:39:44 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:14:23.352 15:39:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:23.352 15:39:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:23.352 15:39:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:23.352 15:39:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:14:23.352 15:39:44 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:14:23.352 15:39:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:23.352 15:39:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:23.352 15:39:44 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:23.352 15:39:44 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:23.352 15:39:44 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:23.352 15:39:44 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:23.352 15:39:44 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:23.352 15:39:44 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:23.352 15:39:44 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:23.352 15:39:44 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:23.352 15:39:44 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:23.352 15:39:44 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:23.352 15:39:44 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:23.352 15:39:44 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:14:23.352 15:39:44 -- bdev/blockdev.sh@98 -- # rpc_cmd 
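The long loop above is the zoned-device filter: each /dev/nvme*n* is skipped if its /sys/block/<dev>/queue/zoned attribute reads anything other than "none", and every survivor contributes one bdev_xnvme_create line to the nvmes array, which rpc_cmd then feeds to the target in a single batch (the six create calls are printed just below). A rough sketch of the same logic:

    for nvme in /dev/nvme*n*; do
        dev=${nvme##*/}
        # skip zoned namespaces; the traced helper treats a missing attribute as "none"
        [[ -e /sys/block/$dev/queue/zoned && $(</sys/block/$dev/queue/zoned) != none ]] && continue
        nvmes+=("bdev_xnvme_create $nvme $dev io_uring")
    done
    printf '%s\n' "${nvmes[@]}" | rpc_cmd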
00:14:23.352 15:39:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.352 15:39:44 -- common/autotest_common.sh@10 -- # set +x 00:14:23.352 15:39:44 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:23.352 nvme0n1 00:14:23.352 nvme1n1 00:14:23.352 nvme1n2 00:14:23.352 nvme1n3 00:14:23.352 nvme2n1 00:14:23.352 nvme3n1 00:14:23.352 15:39:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:14:23.352 15:39:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.352 15:39:44 -- common/autotest_common.sh@10 -- # set +x 00:14:23.352 15:39:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@738 -- # cat 00:14:23.352 15:39:44 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:14:23.352 15:39:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.352 15:39:44 -- common/autotest_common.sh@10 -- # set +x 00:14:23.352 15:39:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:14:23.352 15:39:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.352 15:39:44 -- common/autotest_common.sh@10 -- # set +x 00:14:23.352 15:39:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:23.352 15:39:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.352 15:39:44 -- common/autotest_common.sh@10 -- # set +x 00:14:23.352 15:39:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:14:23.352 15:39:44 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:14:23.352 15:39:44 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:14:23.352 15:39:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.352 15:39:44 -- common/autotest_common.sh@10 -- # set +x 00:14:23.352 15:39:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.352 15:39:44 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:14:23.352 15:39:44 -- bdev/blockdev.sh@747 -- # jq -r .name 00:14:23.353 15:39:44 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2eadbef8-a3d3-4d9f-8ca4-e0e2587e6c29"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2eadbef8-a3d3-4d9f-8ca4-e0e2587e6c29",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "33869ad8-b8e8-4337-820e-b553614d2f7f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "33869ad8-b8e8-4337-820e-b553614d2f7f",' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "8ae33ee1-6a61-4fdb-9f88-8fe818b01f6a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8ae33ee1-6a61-4fdb-9f88-8fe818b01f6a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "d973ea2e-b5fc-43fe-b4a5-a25c431f9fb6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d973ea2e-b5fc-43fe-b4a5-a25c431f9fb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a92a309c-a08a-4ba8-8ecb-1c1342475183"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a92a309c-a08a-4ba8-8ecb-1c1342475183",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "691a2e5a-fb11-47c6-8ed5-d79d2649807a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "691a2e5a-fb11-47c6-8ed5-d79d2649807a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:14:23.353 15:39:44 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:14:23.353 15:39:44 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:14:23.353 15:39:44 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:14:23.353 15:39:44 -- bdev/blockdev.sh@752 -- # killprocess 68178 00:14:23.353 15:39:44 -- 
common/autotest_common.sh@926 -- # '[' -z 68178 ']' 00:14:23.353 15:39:44 -- common/autotest_common.sh@930 -- # kill -0 68178 00:14:23.353 15:39:44 -- common/autotest_common.sh@931 -- # uname 00:14:23.353 15:39:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:23.353 15:39:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68178 00:14:23.353 15:39:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:23.353 15:39:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:23.353 killing process with pid 68178 00:14:23.353 15:39:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68178' 00:14:23.353 15:39:44 -- common/autotest_common.sh@945 -- # kill 68178 00:14:23.353 15:39:44 -- common/autotest_common.sh@950 -- # wait 68178 00:14:25.882 15:39:46 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:25.882 15:39:46 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:25.882 15:39:46 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:14:25.882 15:39:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:25.882 15:39:46 -- common/autotest_common.sh@10 -- # set +x 00:14:25.882 ************************************ 00:14:25.882 START TEST bdev_hello_world 00:14:25.882 ************************************ 00:14:25.882 15:39:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:25.882 [2024-07-24 15:39:47.024586] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:25.882 [2024-07-24 15:39:47.024733] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68574 ] 00:14:25.882 [2024-07-24 15:39:47.188501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.882 [2024-07-24 15:39:47.369458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.448 [2024-07-24 15:39:47.788593] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:26.448 [2024-07-24 15:39:47.788656] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:26.448 [2024-07-24 15:39:47.788679] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:26.448 [2024-07-24 15:39:47.790801] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:26.448 [2024-07-24 15:39:47.791174] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:26.448 [2024-07-24 15:39:47.791212] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:26.448 [2024-07-24 15:39:47.791475] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
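At this point hello_bdev has written "Hello World!" through the first xnvme bdev and read it straight back. The whole example is driven by the single command traced at the start of this test, pointed at the same bdev.json the rest of the blockdev suite uses:

    # sketch: the hello-world example against bdev nvme0n1
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1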
00:14:26.448 00:14:26.448 [2024-07-24 15:39:47.791520] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:27.382 00:14:27.382 real 0m1.903s 00:14:27.382 user 0m1.588s 00:14:27.382 sys 0m0.199s 00:14:27.382 15:39:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.382 15:39:48 -- common/autotest_common.sh@10 -- # set +x 00:14:27.382 ************************************ 00:14:27.382 END TEST bdev_hello_world 00:14:27.382 ************************************ 00:14:27.382 15:39:48 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:14:27.382 15:39:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:27.382 15:39:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:27.382 15:39:48 -- common/autotest_common.sh@10 -- # set +x 00:14:27.382 ************************************ 00:14:27.382 START TEST bdev_bounds 00:14:27.382 ************************************ 00:14:27.382 15:39:48 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:14:27.382 15:39:48 -- bdev/blockdev.sh@288 -- # bdevio_pid=68616 00:14:27.382 15:39:48 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:27.382 15:39:48 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:27.382 Process bdevio pid: 68616 00:14:27.382 15:39:48 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 68616' 00:14:27.382 15:39:48 -- bdev/blockdev.sh@291 -- # waitforlisten 68616 00:14:27.382 15:39:48 -- common/autotest_common.sh@819 -- # '[' -z 68616 ']' 00:14:27.382 15:39:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.382 15:39:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:27.382 15:39:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.382 15:39:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:27.382 15:39:48 -- common/autotest_common.sh@10 -- # set +x 00:14:27.640 [2024-07-24 15:39:49.005147] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
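bdevio is starting up here with -w, so it idles until it is driven over RPC; once its reactors are running, tests.py perform_tests triggers the per-bdev suites whose results follow. Condensed from the trace (paths as in this workspace):

    # sketch: bdevio waits for RPC, tests.py kicks off all six 'bdevio tests on: ...' suites
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests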
00:14:27.640 [2024-07-24 15:39:49.005311] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68616 ] 00:14:27.640 [2024-07-24 15:39:49.174399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:27.898 [2024-07-24 15:39:49.366926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.898 [2024-07-24 15:39:49.367050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.898 [2024-07-24 15:39:49.367068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.466 15:39:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:28.466 15:39:49 -- common/autotest_common.sh@852 -- # return 0 00:14:28.466 15:39:49 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:28.466 I/O targets: 00:14:28.466 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:28.466 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:28.466 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:28.466 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:28.466 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:28.466 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:28.466 00:14:28.466 00:14:28.466 CUnit - A unit testing framework for C - Version 2.1-3 00:14:28.466 http://cunit.sourceforge.net/ 00:14:28.466 00:14:28.466 00:14:28.466 Suite: bdevio tests on: nvme3n1 00:14:28.466 Test: blockdev write read block ...passed 00:14:28.466 Test: blockdev write zeroes read block ...passed 00:14:28.466 Test: blockdev write zeroes read no split ...passed 00:14:28.466 Test: blockdev write zeroes read split ...passed 00:14:28.466 Test: blockdev write zeroes read split partial ...passed 00:14:28.466 Test: blockdev reset ...passed 00:14:28.466 Test: blockdev write read 8 blocks ...passed 00:14:28.466 Test: blockdev write read size > 128k ...passed 00:14:28.466 Test: blockdev write read invalid size ...passed 00:14:28.466 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:28.466 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:28.466 Test: blockdev write read max offset ...passed 00:14:28.466 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:28.466 Test: blockdev writev readv 8 blocks ...passed 00:14:28.725 Test: blockdev writev readv 30 x 1block ...passed 00:14:28.725 Test: blockdev writev readv block ...passed 00:14:28.725 Test: blockdev writev readv size > 128k ...passed 00:14:28.725 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:28.725 Test: blockdev comparev and writev ...passed 00:14:28.725 Test: blockdev nvme passthru rw ...passed 00:14:28.725 Test: blockdev nvme passthru vendor specific ...passed 00:14:28.725 Test: blockdev nvme admin passthru ...passed 00:14:28.725 Test: blockdev copy ...passed 00:14:28.725 Suite: bdevio tests on: nvme2n1 00:14:28.725 Test: blockdev write read block ...passed 00:14:28.725 Test: blockdev write zeroes read block ...passed 00:14:28.725 Test: blockdev write zeroes read no split ...passed 00:14:28.725 Test: blockdev write zeroes read split ...passed 00:14:28.725 Test: blockdev write zeroes read split partial ...passed 00:14:28.725 Test: blockdev reset ...passed 00:14:28.725 Test: blockdev write read 8 blocks ...passed 00:14:28.725 Test: blockdev write read size > 128k 
...passed 00:14:28.725 Test: blockdev write read invalid size ...passed 00:14:28.725 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:28.725 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:28.725 Test: blockdev write read max offset ...passed 00:14:28.725 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:28.725 Test: blockdev writev readv 8 blocks ...passed 00:14:28.725 Test: blockdev writev readv 30 x 1block ...passed 00:14:28.725 Test: blockdev writev readv block ...passed 00:14:28.725 Test: blockdev writev readv size > 128k ...passed 00:14:28.725 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:28.725 Test: blockdev comparev and writev ...passed 00:14:28.725 Test: blockdev nvme passthru rw ...passed 00:14:28.725 Test: blockdev nvme passthru vendor specific ...passed 00:14:28.725 Test: blockdev nvme admin passthru ...passed 00:14:28.725 Test: blockdev copy ...passed 00:14:28.725 Suite: bdevio tests on: nvme1n3 00:14:28.725 Test: blockdev write read block ...passed 00:14:28.725 Test: blockdev write zeroes read block ...passed 00:14:28.725 Test: blockdev write zeroes read no split ...passed 00:14:28.725 Test: blockdev write zeroes read split ...passed 00:14:28.725 Test: blockdev write zeroes read split partial ...passed 00:14:28.725 Test: blockdev reset ...passed 00:14:28.725 Test: blockdev write read 8 blocks ...passed 00:14:28.725 Test: blockdev write read size > 128k ...passed 00:14:28.725 Test: blockdev write read invalid size ...passed 00:14:28.725 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:28.725 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:28.725 Test: blockdev write read max offset ...passed 00:14:28.725 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:28.725 Test: blockdev writev readv 8 blocks ...passed 00:14:28.725 Test: blockdev writev readv 30 x 1block ...passed 00:14:28.725 Test: blockdev writev readv block ...passed 00:14:28.725 Test: blockdev writev readv size > 128k ...passed 00:14:28.725 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:28.725 Test: blockdev comparev and writev ...passed 00:14:28.725 Test: blockdev nvme passthru rw ...passed 00:14:28.725 Test: blockdev nvme passthru vendor specific ...passed 00:14:28.725 Test: blockdev nvme admin passthru ...passed 00:14:28.725 Test: blockdev copy ...passed 00:14:28.725 Suite: bdevio tests on: nvme1n2 00:14:28.725 Test: blockdev write read block ...passed 00:14:28.725 Test: blockdev write zeroes read block ...passed 00:14:28.725 Test: blockdev write zeroes read no split ...passed 00:14:28.725 Test: blockdev write zeroes read split ...passed 00:14:28.725 Test: blockdev write zeroes read split partial ...passed 00:14:28.725 Test: blockdev reset ...passed 00:14:28.725 Test: blockdev write read 8 blocks ...passed 00:14:28.725 Test: blockdev write read size > 128k ...passed 00:14:28.725 Test: blockdev write read invalid size ...passed 00:14:28.725 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:28.725 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:28.725 Test: blockdev write read max offset ...passed 00:14:28.725 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:28.725 Test: blockdev writev readv 8 blocks ...passed 00:14:28.725 Test: blockdev writev readv 30 x 1block ...passed 00:14:28.725 Test: blockdev writev readv 
block ...passed 00:14:28.725 Test: blockdev writev readv size > 128k ...passed 00:14:28.725 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:28.725 Test: blockdev comparev and writev ...passed 00:14:28.725 Test: blockdev nvme passthru rw ...passed 00:14:28.725 Test: blockdev nvme passthru vendor specific ...passed 00:14:28.725 Test: blockdev nvme admin passthru ...passed 00:14:28.725 Test: blockdev copy ...passed 00:14:28.725 Suite: bdevio tests on: nvme1n1 00:14:28.725 Test: blockdev write read block ...passed 00:14:28.725 Test: blockdev write zeroes read block ...passed 00:14:28.725 Test: blockdev write zeroes read no split ...passed 00:14:28.725 Test: blockdev write zeroes read split ...passed 00:14:28.985 Test: blockdev write zeroes read split partial ...passed 00:14:28.985 Test: blockdev reset ...passed 00:14:28.985 Test: blockdev write read 8 blocks ...passed 00:14:28.985 Test: blockdev write read size > 128k ...passed 00:14:28.985 Test: blockdev write read invalid size ...passed 00:14:28.985 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:28.985 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:28.985 Test: blockdev write read max offset ...passed 00:14:28.985 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:28.985 Test: blockdev writev readv 8 blocks ...passed 00:14:28.985 Test: blockdev writev readv 30 x 1block ...passed 00:14:28.985 Test: blockdev writev readv block ...passed 00:14:28.985 Test: blockdev writev readv size > 128k ...passed 00:14:28.985 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:28.985 Test: blockdev comparev and writev ...passed 00:14:28.985 Test: blockdev nvme passthru rw ...passed 00:14:28.985 Test: blockdev nvme passthru vendor specific ...passed 00:14:28.985 Test: blockdev nvme admin passthru ...passed 00:14:28.985 Test: blockdev copy ...passed 00:14:28.985 Suite: bdevio tests on: nvme0n1 00:14:28.985 Test: blockdev write read block ...passed 00:14:28.985 Test: blockdev write zeroes read block ...passed 00:14:28.985 Test: blockdev write zeroes read no split ...passed 00:14:28.985 Test: blockdev write zeroes read split ...passed 00:14:28.985 Test: blockdev write zeroes read split partial ...passed 00:14:28.985 Test: blockdev reset ...passed 00:14:28.985 Test: blockdev write read 8 blocks ...passed 00:14:28.985 Test: blockdev write read size > 128k ...passed 00:14:28.985 Test: blockdev write read invalid size ...passed 00:14:28.985 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:28.985 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:28.985 Test: blockdev write read max offset ...passed 00:14:28.985 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:28.985 Test: blockdev writev readv 8 blocks ...passed 00:14:28.985 Test: blockdev writev readv 30 x 1block ...passed 00:14:28.985 Test: blockdev writev readv block ...passed 00:14:28.985 Test: blockdev writev readv size > 128k ...passed 00:14:28.985 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:28.985 Test: blockdev comparev and writev ...passed 00:14:28.985 Test: blockdev nvme passthru rw ...passed 00:14:28.985 Test: blockdev nvme passthru vendor specific ...passed 00:14:28.985 Test: blockdev nvme admin passthru ...passed 00:14:28.985 Test: blockdev copy ...passed 00:14:28.985 00:14:28.985 Run Summary: Type Total Ran Passed Failed Inactive 00:14:28.985 suites 6 6 n/a 0 0 
00:14:28.985 tests 138 138 138 0 0 00:14:28.985 asserts 780 780 780 0 n/a 00:14:28.985 00:14:28.985 Elapsed time = 1.114 seconds 00:14:28.985 0 00:14:28.985 15:39:50 -- bdev/blockdev.sh@293 -- # killprocess 68616 00:14:28.985 15:39:50 -- common/autotest_common.sh@926 -- # '[' -z 68616 ']' 00:14:28.985 15:39:50 -- common/autotest_common.sh@930 -- # kill -0 68616 00:14:28.985 15:39:50 -- common/autotest_common.sh@931 -- # uname 00:14:28.985 15:39:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:28.985 15:39:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68616 00:14:28.985 15:39:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:28.985 15:39:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:28.985 killing process with pid 68616 00:14:28.985 15:39:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68616' 00:14:28.985 15:39:50 -- common/autotest_common.sh@945 -- # kill 68616 00:14:28.985 15:39:50 -- common/autotest_common.sh@950 -- # wait 68616 00:14:30.361 15:39:51 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:14:30.361 00:14:30.361 real 0m2.633s 00:14:30.361 user 0m6.238s 00:14:30.361 sys 0m0.317s 00:14:30.361 15:39:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:30.361 15:39:51 -- common/autotest_common.sh@10 -- # set +x 00:14:30.361 ************************************ 00:14:30.361 END TEST bdev_bounds 00:14:30.361 ************************************ 00:14:30.361 15:39:51 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:14:30.361 15:39:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:30.361 15:39:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:30.361 15:39:51 -- common/autotest_common.sh@10 -- # set +x 00:14:30.361 ************************************ 00:14:30.361 START TEST bdev_nbd 00:14:30.361 ************************************ 00:14:30.361 15:39:51 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:14:30.361 15:39:51 -- bdev/blockdev.sh@298 -- # uname -s 00:14:30.361 15:39:51 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:14:30.361 15:39:51 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:30.361 15:39:51 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:30.361 15:39:51 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:30.361 15:39:51 -- bdev/blockdev.sh@302 -- # local bdev_all 00:14:30.361 15:39:51 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:14:30.361 15:39:51 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:14:30.361 15:39:51 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:30.361 15:39:51 -- bdev/blockdev.sh@309 -- # local nbd_all 00:14:30.361 15:39:51 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:14:30.361 15:39:51 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:30.361 15:39:51 -- bdev/blockdev.sh@312 -- # local nbd_list 00:14:30.361 15:39:51 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 
'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:30.361 15:39:51 -- bdev/blockdev.sh@313 -- # local bdev_list 00:14:30.361 15:39:51 -- bdev/blockdev.sh@316 -- # nbd_pid=68681 00:14:30.361 15:39:51 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:30.361 15:39:51 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:30.361 15:39:51 -- bdev/blockdev.sh@318 -- # waitforlisten 68681 /var/tmp/spdk-nbd.sock 00:14:30.361 15:39:51 -- common/autotest_common.sh@819 -- # '[' -z 68681 ']' 00:14:30.361 15:39:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:30.361 15:39:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:30.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:30.361 15:39:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:30.361 15:39:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:30.361 15:39:51 -- common/autotest_common.sh@10 -- # set +x 00:14:30.361 [2024-07-24 15:39:51.661630] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:30.361 [2024-07-24 15:39:51.662217] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.361 [2024-07-24 15:39:51.828255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.620 [2024-07-24 15:39:52.053047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.186 15:39:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:31.186 15:39:52 -- common/autotest_common.sh@852 -- # return 0 00:14:31.186 15:39:52 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:14:31.186 15:39:52 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:31.186 15:39:52 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:31.186 15:39:52 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:31.186 15:39:52 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:14:31.186 15:39:52 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:31.186 15:39:52 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:31.186 15:39:52 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:31.186 15:39:52 -- bdev/nbd_common.sh@24 -- # local i 00:14:31.186 15:39:52 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:31.186 15:39:52 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:31.186 15:39:52 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:31.186 15:39:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:31.444 15:39:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:31.444 15:39:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:31.444 15:39:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:31.444 15:39:52 -- common/autotest_common.sh@856 -- # local 
nbd_name=nbd0 00:14:31.444 15:39:52 -- common/autotest_common.sh@857 -- # local i 00:14:31.444 15:39:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:31.444 15:39:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:31.444 15:39:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:31.444 15:39:52 -- common/autotest_common.sh@861 -- # break 00:14:31.444 15:39:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:31.444 15:39:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:31.444 15:39:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.444 1+0 records in 00:14:31.444 1+0 records out 00:14:31.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054515 s, 7.5 MB/s 00:14:31.444 15:39:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.444 15:39:52 -- common/autotest_common.sh@874 -- # size=4096 00:14:31.444 15:39:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.444 15:39:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:31.444 15:39:52 -- common/autotest_common.sh@877 -- # return 0 00:14:31.444 15:39:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:31.444 15:39:52 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:31.444 15:39:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:31.724 15:39:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:31.724 15:39:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:31.724 15:39:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:31.724 15:39:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:14:31.725 15:39:53 -- common/autotest_common.sh@857 -- # local i 00:14:31.725 15:39:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:31.725 15:39:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:31.725 15:39:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:14:31.725 15:39:53 -- common/autotest_common.sh@861 -- # break 00:14:31.725 15:39:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:31.725 15:39:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:31.725 15:39:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.725 1+0 records in 00:14:31.725 1+0 records out 00:14:31.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502197 s, 8.2 MB/s 00:14:31.725 15:39:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.725 15:39:53 -- common/autotest_common.sh@874 -- # size=4096 00:14:31.725 15:39:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.725 15:39:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:31.725 15:39:53 -- common/autotest_common.sh@877 -- # return 0 00:14:31.725 15:39:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:31.725 15:39:53 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:31.725 15:39:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:14:31.984 15:39:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:31.984 15:39:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:31.984 15:39:53 -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:14:31.984 15:39:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:14:31.984 15:39:53 -- common/autotest_common.sh@857 -- # local i 00:14:31.984 15:39:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:31.984 15:39:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:31.984 15:39:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:14:31.984 15:39:53 -- common/autotest_common.sh@861 -- # break 00:14:31.984 15:39:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:31.984 15:39:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:31.984 15:39:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.984 1+0 records in 00:14:31.984 1+0 records out 00:14:31.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605101 s, 6.8 MB/s 00:14:31.984 15:39:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.984 15:39:53 -- common/autotest_common.sh@874 -- # size=4096 00:14:31.984 15:39:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.984 15:39:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:31.984 15:39:53 -- common/autotest_common.sh@877 -- # return 0 00:14:31.984 15:39:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:31.984 15:39:53 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:31.984 15:39:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:14:32.242 15:39:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:32.242 15:39:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:32.242 15:39:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:32.242 15:39:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:14:32.242 15:39:53 -- common/autotest_common.sh@857 -- # local i 00:14:32.242 15:39:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:32.242 15:39:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:32.242 15:39:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:14:32.242 15:39:53 -- common/autotest_common.sh@861 -- # break 00:14:32.242 15:39:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:32.242 15:39:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:32.242 15:39:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.242 1+0 records in 00:14:32.242 1+0 records out 00:14:32.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577584 s, 7.1 MB/s 00:14:32.242 15:39:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.242 15:39:53 -- common/autotest_common.sh@874 -- # size=4096 00:14:32.242 15:39:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.242 15:39:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:32.242 15:39:53 -- common/autotest_common.sh@877 -- # return 0 00:14:32.242 15:39:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:32.242 15:39:53 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:32.242 15:39:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:32.500 15:39:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:32.500 15:39:54 -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:32.500 15:39:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:32.500 15:39:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:14:32.500 15:39:54 -- common/autotest_common.sh@857 -- # local i 00:14:32.500 15:39:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:32.500 15:39:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:32.501 15:39:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:14:32.501 15:39:54 -- common/autotest_common.sh@861 -- # break 00:14:32.501 15:39:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:32.501 15:39:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:32.501 15:39:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.501 1+0 records in 00:14:32.501 1+0 records out 00:14:32.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570105 s, 7.2 MB/s 00:14:32.501 15:39:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.501 15:39:54 -- common/autotest_common.sh@874 -- # size=4096 00:14:32.501 15:39:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.501 15:39:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:32.501 15:39:54 -- common/autotest_common.sh@877 -- # return 0 00:14:32.501 15:39:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:32.501 15:39:54 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:32.501 15:39:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:33.065 15:39:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:33.065 15:39:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:33.065 15:39:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:33.065 15:39:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:14:33.065 15:39:54 -- common/autotest_common.sh@857 -- # local i 00:14:33.065 15:39:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:33.065 15:39:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:33.065 15:39:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:14:33.065 15:39:54 -- common/autotest_common.sh@861 -- # break 00:14:33.065 15:39:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:33.065 15:39:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:33.065 15:39:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.065 1+0 records in 00:14:33.065 1+0 records out 00:14:33.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000813387 s, 5.0 MB/s 00:14:33.065 15:39:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.065 15:39:54 -- common/autotest_common.sh@874 -- # size=4096 00:14:33.065 15:39:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.065 15:39:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:33.065 15:39:54 -- common/autotest_common.sh@877 -- # return 0 00:14:33.065 15:39:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:33.065 15:39:54 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:33.065 15:39:54 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:33.065 15:39:54 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:33.065 { 00:14:33.065 "nbd_device": "/dev/nbd0", 00:14:33.065 "bdev_name": "nvme0n1" 00:14:33.065 }, 00:14:33.065 { 00:14:33.065 "nbd_device": "/dev/nbd1", 00:14:33.065 "bdev_name": "nvme1n1" 00:14:33.065 }, 00:14:33.065 { 00:14:33.065 "nbd_device": "/dev/nbd2", 00:14:33.065 "bdev_name": "nvme1n2" 00:14:33.065 }, 00:14:33.065 { 00:14:33.065 "nbd_device": "/dev/nbd3", 00:14:33.065 "bdev_name": "nvme1n3" 00:14:33.065 }, 00:14:33.065 { 00:14:33.065 "nbd_device": "/dev/nbd4", 00:14:33.065 "bdev_name": "nvme2n1" 00:14:33.065 }, 00:14:33.065 { 00:14:33.065 "nbd_device": "/dev/nbd5", 00:14:33.065 "bdev_name": "nvme3n1" 00:14:33.065 } 00:14:33.065 ]' 00:14:33.065 15:39:54 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:33.065 15:39:54 -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:33.065 { 00:14:33.065 "nbd_device": "/dev/nbd0", 00:14:33.065 "bdev_name": "nvme0n1" 00:14:33.065 }, 00:14:33.065 { 00:14:33.065 "nbd_device": "/dev/nbd1", 00:14:33.065 "bdev_name": "nvme1n1" 00:14:33.065 }, 00:14:33.065 { 00:14:33.065 "nbd_device": "/dev/nbd2", 00:14:33.065 "bdev_name": "nvme1n2" 00:14:33.065 }, 00:14:33.065 { 00:14:33.065 "nbd_device": "/dev/nbd3", 00:14:33.065 "bdev_name": "nvme1n3" 00:14:33.065 }, 00:14:33.065 { 00:14:33.065 "nbd_device": "/dev/nbd4", 00:14:33.065 "bdev_name": "nvme2n1" 00:14:33.065 }, 00:14:33.065 { 00:14:33.065 "nbd_device": "/dev/nbd5", 00:14:33.065 "bdev_name": "nvme3n1" 00:14:33.065 } 00:14:33.065 ]' 00:14:33.065 15:39:54 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:33.323 15:39:54 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:33.323 15:39:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:33.323 15:39:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:33.323 15:39:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:33.323 15:39:54 -- bdev/nbd_common.sh@51 -- # local i 00:14:33.323 15:39:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.323 15:39:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:33.581 15:39:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:33.581 15:39:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:33.581 15:39:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:33.581 15:39:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.581 15:39:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.581 15:39:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:33.581 15:39:55 -- bdev/nbd_common.sh@41 -- # break 00:14:33.581 15:39:55 -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.581 15:39:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.581 15:39:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:33.839 15:39:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:33.839 15:39:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:33.839 15:39:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:33.839 15:39:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.839 15:39:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.839 15:39:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:14:33.839 15:39:55 -- bdev/nbd_common.sh@41 -- # break 00:14:33.839 15:39:55 -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.839 15:39:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.839 15:39:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:34.097 15:39:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:34.097 15:39:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:34.097 15:39:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:34.097 15:39:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.097 15:39:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.097 15:39:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:14:34.097 15:39:55 -- bdev/nbd_common.sh@41 -- # break 00:14:34.097 15:39:55 -- bdev/nbd_common.sh@45 -- # return 0 00:14:34.097 15:39:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:34.097 15:39:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:34.355 15:39:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:34.355 15:39:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:34.355 15:39:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:34.355 15:39:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.355 15:39:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.355 15:39:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:34.355 15:39:55 -- bdev/nbd_common.sh@41 -- # break 00:14:34.355 15:39:55 -- bdev/nbd_common.sh@45 -- # return 0 00:14:34.355 15:39:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:34.355 15:39:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:34.613 15:39:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:34.613 15:39:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:34.613 15:39:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:34.613 15:39:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.613 15:39:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.613 15:39:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:34.613 15:39:56 -- bdev/nbd_common.sh@41 -- # break 00:14:34.613 15:39:56 -- bdev/nbd_common.sh@45 -- # return 0 00:14:34.613 15:39:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:34.613 15:39:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:34.870 15:39:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:34.871 15:39:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:34.871 15:39:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:34.871 15:39:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.871 15:39:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.871 15:39:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:34.871 15:39:56 -- bdev/nbd_common.sh@41 -- # break 00:14:34.871 15:39:56 -- bdev/nbd_common.sh@45 -- # return 0 00:14:34.871 15:39:56 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:34.871 15:39:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:34.871 15:39:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
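At this point the harness has stopped all six exports and is about to confirm that nbd_get_disks reports none left. A minimal sketch of the start/verify/stop round-trip for a single device, assuming a running SPDK target on /var/tmp/spdk-nbd.sock and an existing bdev named nvme0n1:

    # export one bdev over NBD, read a block back through it, then tear it down
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # direct read through the export
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks      # expected: [] after teardown
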
00:14:35.129 15:39:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@65 -- # true 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@65 -- # count=0 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@122 -- # count=0 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@127 -- # return 0 00:14:35.129 15:39:56 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@12 -- # local i 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:35.129 15:39:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:35.388 /dev/nbd0 00:14:35.388 15:39:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:35.388 15:39:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:35.388 15:39:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:35.388 15:39:56 -- common/autotest_common.sh@857 -- # local i 00:14:35.388 15:39:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:35.388 15:39:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:35.388 15:39:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:35.388 15:39:56 -- common/autotest_common.sh@861 -- # break 00:14:35.388 15:39:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:35.388 15:39:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:35.388 15:39:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:35.388 1+0 records in 00:14:35.388 1+0 records out 00:14:35.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395376 s, 
10.4 MB/s 00:14:35.388 15:39:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.388 15:39:56 -- common/autotest_common.sh@874 -- # size=4096 00:14:35.388 15:39:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.388 15:39:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:35.388 15:39:56 -- common/autotest_common.sh@877 -- # return 0 00:14:35.388 15:39:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.388 15:39:56 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:35.388 15:39:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:14:35.646 /dev/nbd1 00:14:35.646 15:39:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:35.646 15:39:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:35.646 15:39:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:14:35.646 15:39:57 -- common/autotest_common.sh@857 -- # local i 00:14:35.646 15:39:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:35.646 15:39:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:35.646 15:39:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:14:35.646 15:39:57 -- common/autotest_common.sh@861 -- # break 00:14:35.646 15:39:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:35.646 15:39:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:35.646 15:39:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:35.646 1+0 records in 00:14:35.646 1+0 records out 00:14:35.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418831 s, 9.8 MB/s 00:14:35.646 15:39:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.646 15:39:57 -- common/autotest_common.sh@874 -- # size=4096 00:14:35.646 15:39:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.646 15:39:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:35.646 15:39:57 -- common/autotest_common.sh@877 -- # return 0 00:14:35.646 15:39:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.646 15:39:57 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:35.646 15:39:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:14:36.213 /dev/nbd10 00:14:36.213 15:39:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:36.213 15:39:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:36.213 15:39:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:14:36.213 15:39:57 -- common/autotest_common.sh@857 -- # local i 00:14:36.213 15:39:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:36.213 15:39:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:36.213 15:39:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:14:36.213 15:39:57 -- common/autotest_common.sh@861 -- # break 00:14:36.213 15:39:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:36.213 15:39:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:36.213 15:39:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.213 1+0 records in 00:14:36.213 1+0 records out 00:14:36.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000530306 s, 7.7 MB/s 00:14:36.213 15:39:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.213 15:39:57 -- common/autotest_common.sh@874 -- # size=4096 00:14:36.213 15:39:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.213 15:39:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:36.213 15:39:57 -- common/autotest_common.sh@877 -- # return 0 00:14:36.213 15:39:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.213 15:39:57 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:36.213 15:39:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:14:36.213 /dev/nbd11 00:14:36.213 15:39:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:36.213 15:39:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:36.213 15:39:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:14:36.213 15:39:57 -- common/autotest_common.sh@857 -- # local i 00:14:36.213 15:39:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:36.213 15:39:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:36.213 15:39:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:14:36.213 15:39:57 -- common/autotest_common.sh@861 -- # break 00:14:36.213 15:39:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:36.213 15:39:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:36.213 15:39:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.213 1+0 records in 00:14:36.213 1+0 records out 00:14:36.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425624 s, 9.6 MB/s 00:14:36.213 15:39:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.471 15:39:57 -- common/autotest_common.sh@874 -- # size=4096 00:14:36.471 15:39:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.471 15:39:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:36.471 15:39:57 -- common/autotest_common.sh@877 -- # return 0 00:14:36.471 15:39:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.471 15:39:57 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:36.471 15:39:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:14:36.471 /dev/nbd12 00:14:36.730 15:39:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:36.730 15:39:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:36.730 15:39:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:14:36.730 15:39:58 -- common/autotest_common.sh@857 -- # local i 00:14:36.730 15:39:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:36.730 15:39:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:36.730 15:39:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:14:36.730 15:39:58 -- common/autotest_common.sh@861 -- # break 00:14:36.730 15:39:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:36.730 15:39:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:36.730 15:39:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.730 1+0 records in 00:14:36.730 1+0 records out 00:14:36.730 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000860094 s, 4.8 MB/s 00:14:36.730 15:39:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.730 15:39:58 -- common/autotest_common.sh@874 -- # size=4096 00:14:36.730 15:39:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.730 15:39:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:36.730 15:39:58 -- common/autotest_common.sh@877 -- # return 0 00:14:36.730 15:39:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.730 15:39:58 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:36.730 15:39:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:36.730 /dev/nbd13 00:14:36.730 15:39:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:36.988 15:39:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:36.988 15:39:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:14:36.988 15:39:58 -- common/autotest_common.sh@857 -- # local i 00:14:36.988 15:39:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:36.988 15:39:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:36.988 15:39:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:14:36.988 15:39:58 -- common/autotest_common.sh@861 -- # break 00:14:36.988 15:39:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:36.988 15:39:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:36.988 15:39:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.988 1+0 records in 00:14:36.988 1+0 records out 00:14:36.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00098648 s, 4.2 MB/s 00:14:36.988 15:39:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.988 15:39:58 -- common/autotest_common.sh@874 -- # size=4096 00:14:36.988 15:39:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.988 15:39:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:36.988 15:39:58 -- common/autotest_common.sh@877 -- # return 0 00:14:36.988 15:39:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.988 15:39:58 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:36.988 15:39:58 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:36.988 15:39:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:36.988 15:39:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:37.246 15:39:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:37.246 { 00:14:37.246 "nbd_device": "/dev/nbd0", 00:14:37.247 "bdev_name": "nvme0n1" 00:14:37.247 }, 00:14:37.247 { 00:14:37.247 "nbd_device": "/dev/nbd1", 00:14:37.247 "bdev_name": "nvme1n1" 00:14:37.247 }, 00:14:37.247 { 00:14:37.247 "nbd_device": "/dev/nbd10", 00:14:37.247 "bdev_name": "nvme1n2" 00:14:37.247 }, 00:14:37.247 { 00:14:37.247 "nbd_device": "/dev/nbd11", 00:14:37.247 "bdev_name": "nvme1n3" 00:14:37.247 }, 00:14:37.247 { 00:14:37.247 "nbd_device": "/dev/nbd12", 00:14:37.247 "bdev_name": "nvme2n1" 00:14:37.247 }, 00:14:37.247 { 00:14:37.247 "nbd_device": "/dev/nbd13", 00:14:37.247 "bdev_name": "nvme3n1" 00:14:37.247 } 00:14:37.247 ]' 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:37.247 { 00:14:37.247 "nbd_device": 
"/dev/nbd0", 00:14:37.247 "bdev_name": "nvme0n1" 00:14:37.247 }, 00:14:37.247 { 00:14:37.247 "nbd_device": "/dev/nbd1", 00:14:37.247 "bdev_name": "nvme1n1" 00:14:37.247 }, 00:14:37.247 { 00:14:37.247 "nbd_device": "/dev/nbd10", 00:14:37.247 "bdev_name": "nvme1n2" 00:14:37.247 }, 00:14:37.247 { 00:14:37.247 "nbd_device": "/dev/nbd11", 00:14:37.247 "bdev_name": "nvme1n3" 00:14:37.247 }, 00:14:37.247 { 00:14:37.247 "nbd_device": "/dev/nbd12", 00:14:37.247 "bdev_name": "nvme2n1" 00:14:37.247 }, 00:14:37.247 { 00:14:37.247 "nbd_device": "/dev/nbd13", 00:14:37.247 "bdev_name": "nvme3n1" 00:14:37.247 } 00:14:37.247 ]' 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:37.247 /dev/nbd1 00:14:37.247 /dev/nbd10 00:14:37.247 /dev/nbd11 00:14:37.247 /dev/nbd12 00:14:37.247 /dev/nbd13' 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:37.247 /dev/nbd1 00:14:37.247 /dev/nbd10 00:14:37.247 /dev/nbd11 00:14:37.247 /dev/nbd12 00:14:37.247 /dev/nbd13' 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@65 -- # count=6 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@66 -- # echo 6 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@95 -- # count=6 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:37.247 256+0 records in 00:14:37.247 256+0 records out 00:14:37.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00938959 s, 112 MB/s 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:37.247 256+0 records in 00:14:37.247 256+0 records out 00:14:37.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126057 s, 8.3 MB/s 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:37.247 15:39:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:37.505 256+0 records in 00:14:37.505 256+0 records out 00:14:37.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127105 s, 8.2 MB/s 00:14:37.505 15:39:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:37.505 15:39:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:37.505 256+0 records in 00:14:37.505 256+0 records out 00:14:37.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136195 s, 7.7 MB/s 00:14:37.505 15:39:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:37.505 15:39:59 -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:37.763 256+0 records in 00:14:37.763 256+0 records out 00:14:37.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137628 s, 7.6 MB/s 00:14:37.763 15:39:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:37.763 15:39:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:38.021 256+0 records in 00:14:38.021 256+0 records out 00:14:38.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152997 s, 6.9 MB/s 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:38.021 256+0 records in 00:14:38.021 256+0 records out 00:14:38.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121174 s, 8.7 MB/s 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@51 -- # local i 00:14:38.021 15:39:59 -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.021 15:39:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:38.279 15:39:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:38.279 15:39:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:38.279 15:39:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:38.279 15:39:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:38.279 15:39:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:38.279 15:39:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:38.279 15:39:59 -- bdev/nbd_common.sh@41 -- # break 00:14:38.279 15:39:59 -- bdev/nbd_common.sh@45 -- # return 0 00:14:38.279 15:39:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.279 15:39:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:38.535 15:40:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:38.535 15:40:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:38.535 15:40:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:38.535 15:40:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:38.535 15:40:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:38.535 15:40:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:38.535 15:40:00 -- bdev/nbd_common.sh@41 -- # break 00:14:38.535 15:40:00 -- bdev/nbd_common.sh@45 -- # return 0 00:14:38.535 15:40:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.535 15:40:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:39.099 15:40:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:39.099 15:40:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:39.099 15:40:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:39.099 15:40:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.099 15:40:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.099 15:40:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:39.099 15:40:00 -- bdev/nbd_common.sh@41 -- # break 00:14:39.099 15:40:00 -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.099 15:40:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.099 15:40:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:39.099 15:40:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:39.099 15:40:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:39.099 15:40:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:39.099 15:40:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.100 15:40:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.100 15:40:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:39.100 15:40:00 -- bdev/nbd_common.sh@41 -- # break 00:14:39.100 15:40:00 -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.100 15:40:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.100 15:40:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:39.665 15:40:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:39.665 15:40:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:39.665 15:40:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:39.665 15:40:00 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.665 15:40:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.665 15:40:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:39.665 15:40:00 -- bdev/nbd_common.sh@41 -- # break 00:14:39.666 15:40:00 -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.666 15:40:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.666 15:40:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:39.666 15:40:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:39.666 15:40:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:39.666 15:40:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:39.666 15:40:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.666 15:40:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.666 15:40:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:39.666 15:40:01 -- bdev/nbd_common.sh@41 -- # break 00:14:39.666 15:40:01 -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.924 15:40:01 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:39.924 15:40:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:39.924 15:40:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:40.182 15:40:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:40.182 15:40:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:40.182 15:40:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:40.182 15:40:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:40.182 15:40:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:40.182 15:40:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:40.182 15:40:01 -- bdev/nbd_common.sh@65 -- # true 00:14:40.182 15:40:01 -- bdev/nbd_common.sh@65 -- # count=0 00:14:40.182 15:40:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:40.182 15:40:01 -- bdev/nbd_common.sh@104 -- # count=0 00:14:40.182 15:40:01 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:40.182 15:40:01 -- bdev/nbd_common.sh@109 -- # return 0 00:14:40.182 15:40:01 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:40.182 15:40:01 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:40.182 15:40:01 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:40.182 15:40:01 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:14:40.182 15:40:01 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:14:40.182 15:40:01 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:40.440 malloc_lvol_verify 00:14:40.440 15:40:01 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:40.698 2232370b-e2ca-4dfd-b08c-3331d0459441 00:14:40.698 15:40:02 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:40.955 921c3602-f96e-4cb0-894c-c17a700e8afe 00:14:40.955 15:40:02 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:41.213 /dev/nbd0 00:14:41.213 15:40:02 -- 
bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:14:41.213 mke2fs 1.46.5 (30-Dec-2021) 00:14:41.213 Discarding device blocks: 0/4096 done 00:14:41.213 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:41.213 00:14:41.213 Allocating group tables: 0/1 done 00:14:41.213 Writing inode tables: 0/1 done 00:14:41.213 Creating journal (1024 blocks): done 00:14:41.213 Writing superblocks and filesystem accounting information: 0/1 done 00:14:41.213 00:14:41.213 15:40:02 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:14:41.213 15:40:02 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:41.213 15:40:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:41.213 15:40:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:41.213 15:40:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:41.213 15:40:02 -- bdev/nbd_common.sh@51 -- # local i 00:14:41.213 15:40:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.213 15:40:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:41.471 15:40:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:41.471 15:40:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:41.471 15:40:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:41.471 15:40:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.471 15:40:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.471 15:40:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:41.471 15:40:02 -- bdev/nbd_common.sh@41 -- # break 00:14:41.471 15:40:02 -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.471 15:40:02 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:14:41.471 15:40:02 -- bdev/nbd_common.sh@147 -- # return 0 00:14:41.471 15:40:02 -- bdev/blockdev.sh@324 -- # killprocess 68681 00:14:41.471 15:40:02 -- common/autotest_common.sh@926 -- # '[' -z 68681 ']' 00:14:41.471 15:40:02 -- common/autotest_common.sh@930 -- # kill -0 68681 00:14:41.471 15:40:02 -- common/autotest_common.sh@931 -- # uname 00:14:41.471 15:40:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:41.471 15:40:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68681 00:14:41.471 15:40:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:41.471 15:40:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:41.471 killing process with pid 68681 00:14:41.471 15:40:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68681' 00:14:41.471 15:40:02 -- common/autotest_common.sh@945 -- # kill 68681 00:14:41.471 15:40:02 -- common/autotest_common.sh@950 -- # wait 68681 00:14:42.845 15:40:04 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:14:42.845 00:14:42.845 real 0m12.483s 00:14:42.845 user 0m17.842s 00:14:42.845 sys 0m3.971s 00:14:42.845 15:40:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.845 ************************************ 00:14:42.845 END TEST bdev_nbd 00:14:42.845 ************************************ 00:14:42.845 15:40:04 -- common/autotest_common.sh@10 -- # set +x 00:14:42.845 15:40:04 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:14:42.845 15:40:04 -- bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:14:42.845 15:40:04 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:14:42.845 15:40:04 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:14:42.845 15:40:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 
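The nbd_with_lvol_verify step traced above layers a logical volume on a malloc bdev, exports it over NBD, and proves the block path works end to end by putting an ext4 filesystem on it. A condensed sketch of that flow, using the same RPC socket and names as this run:

    # carve an lvol out of a 16 MiB malloc bdev and format it through NBD
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs   # 4 MiB lvol in store "lvs"
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
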
00:14:42.845 15:40:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:42.845 15:40:04 -- common/autotest_common.sh@10 -- # set +x 00:14:42.846 ************************************ 00:14:42.846 START TEST bdev_fio 00:14:42.846 ************************************ 00:14:42.846 15:40:04 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:14:42.846 15:40:04 -- bdev/blockdev.sh@329 -- # local env_context 00:14:42.846 15:40:04 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:42.846 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:42.846 15:40:04 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:42.846 15:40:04 -- bdev/blockdev.sh@337 -- # echo '' 00:14:42.846 15:40:04 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:14:42.846 15:40:04 -- bdev/blockdev.sh@337 -- # env_context= 00:14:42.846 15:40:04 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:42.846 15:40:04 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:42.846 15:40:04 -- common/autotest_common.sh@1260 -- # local workload=verify 00:14:42.846 15:40:04 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:14:42.846 15:40:04 -- common/autotest_common.sh@1262 -- # local env_context= 00:14:42.846 15:40:04 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:14:42.846 15:40:04 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:42.846 15:40:04 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:14:42.846 15:40:04 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:14:42.846 15:40:04 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:42.846 15:40:04 -- common/autotest_common.sh@1280 -- # cat 00:14:42.846 15:40:04 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:14:42.846 15:40:04 -- common/autotest_common.sh@1293 -- # cat 00:14:42.846 15:40:04 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:14:42.846 15:40:04 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:14:42.846 15:40:04 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:42.846 15:40:04 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:14:42.846 15:40:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:14:42.846 15:40:04 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:14:42.846 15:40:04 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:14:42.846 15:40:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:14:42.846 15:40:04 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:14:42.846 15:40:04 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:14:42.846 15:40:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:14:42.846 15:40:04 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:14:42.846 15:40:04 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:14:42.846 15:40:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:14:42.846 15:40:04 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 00:14:42.846 15:40:04 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:14:42.846 15:40:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:14:42.846 15:40:04 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:14:42.846 15:40:04 -- bdev/blockdev.sh@341 -- # echo 
filename=nvme2n1 00:14:42.846 15:40:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:14:42.846 15:40:04 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:14:42.846 15:40:04 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:14:42.846 15:40:04 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:42.846 15:40:04 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:42.846 15:40:04 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:14:42.846 15:40:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:42.846 15:40:04 -- common/autotest_common.sh@10 -- # set +x 00:14:42.846 ************************************ 00:14:42.846 START TEST bdev_fio_rw_verify 00:14:42.846 ************************************ 00:14:42.846 15:40:04 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:42.846 15:40:04 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:42.846 15:40:04 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:14:42.846 15:40:04 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:42.846 15:40:04 -- common/autotest_common.sh@1318 -- # local sanitizers 00:14:42.846 15:40:04 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:42.846 15:40:04 -- common/autotest_common.sh@1320 -- # shift 00:14:42.846 15:40:04 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:14:42.846 15:40:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:42.846 15:40:04 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:42.846 15:40:04 -- common/autotest_common.sh@1324 -- # grep libasan 00:14:42.846 15:40:04 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:42.846 15:40:04 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:42.846 15:40:04 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:42.846 15:40:04 -- common/autotest_common.sh@1326 -- # break 00:14:42.846 15:40:04 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:42.846 15:40:04 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:42.846 
job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:42.846 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:42.846 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:42.846 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:42.846 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:42.846 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:42.846 fio-3.35 00:14:42.846 Starting 6 threads 00:14:55.051 00:14:55.051 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69100: Wed Jul 24 15:40:15 2024 00:14:55.051 read: IOPS=25.1k, BW=98.0MiB/s (103MB/s)(980MiB/10001msec) 00:14:55.051 slat (usec): min=3, max=3140, avg= 7.28, stdev= 7.65 00:14:55.051 clat (usec): min=139, max=15918, avg=748.47, stdev=345.73 00:14:55.051 lat (usec): min=142, max=15928, avg=755.75, stdev=346.40 00:14:55.051 clat percentiles (usec): 00:14:55.051 | 50.000th=[ 750], 99.000th=[ 1631], 99.900th=[ 3916], 99.990th=[ 8848], 00:14:55.051 | 99.999th=[15926] 00:14:55.051 write: IOPS=25.4k, BW=99.3MiB/s (104MB/s)(993MiB/10001msec); 0 zone resets 00:14:55.051 slat (usec): min=14, max=1995, avg=30.06, stdev=27.90 00:14:55.051 clat (usec): min=93, max=16099, avg=827.20, stdev=391.00 00:14:55.051 lat (usec): min=125, max=16345, avg=857.26, stdev=393.46 00:14:55.051 clat percentiles (usec): 00:14:55.051 | 50.000th=[ 816], 99.000th=[ 1778], 99.900th=[ 4621], 99.990th=[15795], 00:14:55.051 | 99.999th=[16057] 00:14:55.051 bw ( KiB/s): min=91654, max=121656, per=100.00%, avg=102009.26, stdev=1308.12, samples=114 00:14:55.051 iops : min=22912, max=30414, avg=25502.16, stdev=327.04, samples=114 00:14:55.051 lat (usec) : 100=0.01%, 250=1.73%, 500=14.89%, 750=28.90%, 1000=36.26% 00:14:55.051 lat (msec) : 2=17.63%, 4=0.46%, 10=0.12%, 20=0.01% 00:14:55.051 cpu : usr=60.94%, sys=25.73%, ctx=5846, majf=0, minf=23775 00:14:55.051 IO depths : 1=12.3%, 2=24.9%, 4=50.1%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:55.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.051 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.051 issued rwts: total=250813,254202,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.051 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:55.051 00:14:55.051 Run status group 0 (all jobs): 00:14:55.051 READ: bw=98.0MiB/s (103MB/s), 98.0MiB/s-98.0MiB/s (103MB/s-103MB/s), io=980MiB (1027MB), run=10001-10001msec 00:14:55.051 WRITE: bw=99.3MiB/s (104MB/s), 99.3MiB/s-99.3MiB/s (104MB/s-104MB/s), io=993MiB (1041MB), run=10001-10001msec 00:14:55.051 ----------------------------------------------------- 00:14:55.051 Suppressions used: 00:14:55.051 count bytes template 00:14:55.051 6 48 /usr/src/fio/parse.c 00:14:55.051 3227 309792 /usr/src/fio/iolog.c 00:14:55.051 1 8 libtcmalloc_minimal.so 00:14:55.051 1 904 libcrypto.so 00:14:55.051 ----------------------------------------------------- 00:14:55.051 00:14:55.051 00:14:55.051 real 0m12.371s 00:14:55.051 user 0m38.582s 00:14:55.051 sys 0m15.819s 00:14:55.051 15:40:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.051 15:40:16 -- common/autotest_common.sh@10 -- # set +x 
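Editor's note: for reference, the fio command driving the run above preloads the SPDK bdev fio plugin (plus the ASan runtime, since this is an SPDK_RUN_ASAN=1 build) in front of the stock fio binary; every path and flag below is copied from the trace:

```bash
# Re-run of the bdev_fio_rw_verify invocation from the trace above.
# libasan.so.8 is only preloaded because this is an ASan build.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK_DIR/build/fio/spdk_bdev" \
/usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    "$SPDK_DIR/test/bdev/bdev.fio" --verify_state_save=0 \
    --spdk_json_conf="$SPDK_DIR/test/bdev/bdev.json" \
    --spdk_mem=0 --aux-path="$SPDK_DIR/../output"
```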
00:14:55.051 ************************************ 00:14:55.051 END TEST bdev_fio_rw_verify 00:14:55.051 ************************************ 00:14:55.051 15:40:16 -- bdev/blockdev.sh@348 -- # rm -f 00:14:55.051 15:40:16 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:55.051 15:40:16 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:55.051 15:40:16 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:55.051 15:40:16 -- common/autotest_common.sh@1260 -- # local workload=trim 00:14:55.051 15:40:16 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:14:55.051 15:40:16 -- common/autotest_common.sh@1262 -- # local env_context= 00:14:55.051 15:40:16 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:14:55.051 15:40:16 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:55.051 15:40:16 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:14:55.051 15:40:16 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:14:55.051 15:40:16 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:55.051 15:40:16 -- common/autotest_common.sh@1280 -- # cat 00:14:55.051 15:40:16 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:14:55.051 15:40:16 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:14:55.052 15:40:16 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:14:55.052 15:40:16 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2eadbef8-a3d3-4d9f-8ca4-e0e2587e6c29"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2eadbef8-a3d3-4d9f-8ca4-e0e2587e6c29",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "33869ad8-b8e8-4337-820e-b553614d2f7f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "33869ad8-b8e8-4337-820e-b553614d2f7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "8ae33ee1-6a61-4fdb-9f88-8fe818b01f6a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8ae33ee1-6a61-4fdb-9f88-8fe818b01f6a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": 
false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "d973ea2e-b5fc-43fe-b4a5-a25c431f9fb6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d973ea2e-b5fc-43fe-b4a5-a25c431f9fb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a92a309c-a08a-4ba8-8ecb-1c1342475183"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a92a309c-a08a-4ba8-8ecb-1c1342475183",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "691a2e5a-fb11-47c6-8ed5-d79d2649807a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "691a2e5a-fb11-47c6-8ed5-d79d2649807a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:14:55.052 15:40:16 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:55.309 15:40:16 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:14:55.309 15:40:16 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:55.309 /home/vagrant/spdk_repo/spdk 00:14:55.309 15:40:16 -- bdev/blockdev.sh@360 -- # popd 00:14:55.309 15:40:16 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:14:55.309 15:40:16 -- bdev/blockdev.sh@362 -- # return 0 00:14:55.309 00:14:55.309 real 0m12.547s 00:14:55.309 user 0m38.686s 00:14:55.309 sys 0m15.889s 00:14:55.309 15:40:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.309 ************************************ 00:14:55.309 END TEST bdev_fio 00:14:55.309 15:40:16 -- common/autotest_common.sh@10 -- # set +x 00:14:55.309 ************************************ 00:14:55.309 15:40:16 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:55.309 15:40:16 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:55.309 15:40:16 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:14:55.309 15:40:16 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:14:55.309 15:40:16 -- common/autotest_common.sh@10 -- # set +x 00:14:55.309 ************************************ 00:14:55.309 START TEST bdev_verify 00:14:55.309 ************************************ 00:14:55.309 15:40:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:55.309 [2024-07-24 15:40:16.797763] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:55.309 [2024-07-24 15:40:16.797918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69272 ] 00:14:55.567 [2024-07-24 15:40:16.975657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:55.825 [2024-07-24 15:40:17.207254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.825 [2024-07-24 15:40:17.207254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.084 Running I/O for 5 seconds... 00:15:01.383 00:15:01.383 Latency(us) 00:15:01.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.383 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:01.383 Verification LBA range: start 0x0 length 0x20000 00:15:01.383 nvme0n1 : 5.07 2415.63 9.44 0.00 0.00 52754.66 13762.56 74353.57 00:15:01.383 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:01.383 Verification LBA range: start 0x20000 length 0x20000 00:15:01.383 nvme0n1 : 5.05 2497.66 9.76 0.00 0.00 51074.80 13941.29 72447.07 00:15:01.383 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:01.383 Verification LBA range: start 0x0 length 0x80000 00:15:01.383 nvme1n1 : 5.07 2307.38 9.01 0.00 0.00 55198.99 14000.87 74830.20 00:15:01.383 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:01.383 Verification LBA range: start 0x80000 length 0x80000 00:15:01.383 nvme1n1 : 5.08 2397.39 9.36 0.00 0.00 53172.84 10307.03 69587.32 00:15:01.383 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:01.383 Verification LBA range: start 0x0 length 0x80000 00:15:01.383 nvme1n2 : 5.07 2418.13 9.45 0.00 0.00 52661.35 10009.13 68634.07 00:15:01.383 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:01.383 Verification LBA range: start 0x80000 length 0x80000 00:15:01.383 nvme1n2 : 5.06 2398.73 9.37 0.00 0.00 53054.55 14179.61 77213.32 00:15:01.383 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:01.383 Verification LBA range: start 0x0 length 0x80000 00:15:01.383 nvme1n3 : 5.08 2407.55 9.40 0.00 0.00 52849.52 15490.33 79119.83 00:15:01.383 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:01.383 Verification LBA range: start 0x80000 length 0x80000 00:15:01.383 nvme1n3 : 5.07 2302.52 8.99 0.00 0.00 55257.48 11796.48 73400.32 00:15:01.383 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:01.383 Verification LBA range: start 0x0 length 0xbd0bd 00:15:01.383 nvme2n1 : 5.07 2641.63 10.32 0.00 0.00 48128.05 7328.12 64821.06 00:15:01.383 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:01.383 Verification LBA range: start 0xbd0bd 
length 0xbd0bd 00:15:01.383 nvme2n1 : 5.07 2751.86 10.75 0.00 0.00 46234.12 3455.53 70063.94 00:15:01.383 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:01.383 Verification LBA range: start 0x0 length 0xa0000 00:15:01.383 nvme3n1 : 5.07 2297.16 8.97 0.00 0.00 55163.63 8579.26 77213.32 00:15:01.383 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:01.383 Verification LBA range: start 0xa0000 length 0xa0000 00:15:01.383 nvme3n1 : 5.08 2456.03 9.59 0.00 0.00 51618.43 4885.41 73876.95 00:15:01.383 =================================================================================================================== 00:15:01.383 Total : 29291.68 114.42 0.00 0.00 52124.53 3455.53 79119.83 00:15:02.761 00:15:02.761 real 0m7.277s 00:15:02.761 user 0m9.713s 00:15:02.761 sys 0m3.101s 00:15:02.761 15:40:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.761 15:40:23 -- common/autotest_common.sh@10 -- # set +x 00:15:02.761 ************************************ 00:15:02.761 END TEST bdev_verify 00:15:02.761 ************************************ 00:15:02.761 15:40:24 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:02.761 15:40:24 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:15:02.761 15:40:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:02.761 15:40:24 -- common/autotest_common.sh@10 -- # set +x 00:15:02.761 ************************************ 00:15:02.761 START TEST bdev_verify_big_io 00:15:02.761 ************************************ 00:15:02.761 15:40:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:02.761 [2024-07-24 15:40:24.126875] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:02.761 [2024-07-24 15:40:24.127060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69381 ] 00:15:02.761 [2024-07-24 15:40:24.303215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:03.019 [2024-07-24 15:40:24.537077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.019 [2024-07-24 15:40:24.537116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.584 Running I/O for 5 seconds... 
00:15:10.195 00:15:10.195 Latency(us) 00:15:10.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.195 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:10.195 Verification LBA range: start 0x0 length 0x2000 00:15:10.195 nvme0n1 : 5.74 225.96 14.12 0.00 0.00 548943.54 63391.19 777852.74 00:15:10.195 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:10.195 Verification LBA range: start 0x2000 length 0x2000 00:15:10.196 nvme0n1 : 5.74 242.48 15.16 0.00 0.00 512739.26 44802.79 728283.69 00:15:10.196 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:10.196 Verification LBA range: start 0x0 length 0x8000 00:15:10.196 nvme1n1 : 5.65 229.75 14.36 0.00 0.00 533919.36 66250.94 713031.68 00:15:10.196 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:10.196 Verification LBA range: start 0x8000 length 0x8000 00:15:10.196 nvme1n1 : 5.76 210.69 13.17 0.00 0.00 576689.32 197322.94 720657.69 00:15:10.196 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:10.196 Verification LBA range: start 0x0 length 0x8000 00:15:10.196 nvme1n2 : 5.75 196.42 12.28 0.00 0.00 602745.13 57433.37 724470.69 00:15:10.196 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:10.196 Verification LBA range: start 0x8000 length 0x8000 00:15:10.196 nvme1n2 : 5.74 211.49 13.22 0.00 0.00 568891.35 63391.19 720657.69 00:15:10.196 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:10.196 Verification LBA range: start 0x0 length 0x8000 00:15:10.196 nvme1n3 : 5.65 199.88 12.49 0.00 0.00 583320.95 69587.32 709218.68 00:15:10.196 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:10.196 Verification LBA range: start 0x8000 length 0x8000 00:15:10.196 nvme1n3 : 5.75 242.01 15.13 0.00 0.00 484185.33 62437.93 545259.52 00:15:10.196 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:10.196 Verification LBA range: start 0x0 length 0xbd0b 00:15:10.196 nvme2n1 : 5.76 241.57 15.10 0.00 0.00 468539.87 59816.49 463279.94 00:15:10.196 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:10.196 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:10.196 nvme2n1 : 5.77 256.12 16.01 0.00 0.00 454443.50 5600.35 434682.41 00:15:10.196 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:10.196 Verification LBA range: start 0x0 length 0xa000 00:15:10.196 nvme3n1 : 5.78 270.88 16.93 0.00 0.00 411168.30 2993.80 428962.91 00:15:10.196 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:10.196 Verification LBA range: start 0xa000 length 0xa000 00:15:10.196 nvme3n1 : 5.77 256.23 16.01 0.00 0.00 440870.05 7268.54 409897.89 00:15:10.196 =================================================================================================================== 00:15:10.196 Total : 2783.48 173.97 0.00 0.00 509533.80 2993.80 777852.74 00:15:10.761 00:15:10.761 real 0m8.230s 00:15:10.761 user 0m14.547s 00:15:10.761 sys 0m0.652s 00:15:10.761 15:40:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.761 ************************************ 00:15:10.761 15:40:32 -- common/autotest_common.sh@10 -- # set +x 00:15:10.761 END TEST bdev_verify_big_io 00:15:10.761 ************************************ 00:15:10.761 15:40:32 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:10.761 15:40:32 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:15:10.761 15:40:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:10.761 15:40:32 -- common/autotest_common.sh@10 -- # set +x 00:15:10.761 ************************************ 00:15:10.761 START TEST bdev_write_zeroes 00:15:10.761 ************************************ 00:15:10.761 15:40:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:11.018 [2024-07-24 15:40:32.380181] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:11.018 [2024-07-24 15:40:32.380384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69494 ] 00:15:11.018 [2024-07-24 15:40:32.544580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.275 [2024-07-24 15:40:32.754630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.840 Running I/O for 1 seconds... 00:15:12.774 00:15:12.774 Latency(us) 00:15:12.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.774 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.774 nvme0n1 : 1.01 9854.04 38.49 0.00 0.00 12974.86 7506.85 18826.71 00:15:12.774 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.774 nvme1n1 : 1.01 9841.96 38.45 0.00 0.00 12978.95 7536.64 19660.80 00:15:12.774 Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.774 nvme1n2 : 1.02 9829.96 38.40 0.00 0.00 12982.50 7536.64 20256.58 00:15:12.774 Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.774 nvme1n3 : 1.02 9817.70 38.35 0.00 0.00 12991.28 7536.64 20256.58 00:15:12.774 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.774 nvme2n1 : 1.02 15395.63 60.14 0.00 0.00 8265.73 2904.44 15609.48 00:15:12.774 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.774 nvme3n1 : 1.02 9775.10 38.18 0.00 0.00 12991.62 5898.24 17992.61 00:15:12.774 =================================================================================================================== 00:15:12.774 Total : 64514.38 252.01 0.00 0.00 11852.60 2904.44 20256.58 00:15:14.148 00:15:14.148 real 0m3.014s 00:15:14.148 user 0m2.268s 00:15:14.148 sys 0m0.569s 00:15:14.148 15:40:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:14.148 15:40:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.148 ************************************ 00:15:14.148 END TEST bdev_write_zeroes 00:15:14.148 ************************************ 00:15:14.148 15:40:35 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:14.148 15:40:35 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:15:14.148 15:40:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:14.148 15:40:35 -- common/autotest_common.sh@10 -- # 
set +x 00:15:14.148 ************************************ 00:15:14.148 START TEST bdev_json_nonenclosed 00:15:14.148 ************************************ 00:15:14.148 15:40:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:14.148 [2024-07-24 15:40:35.456337] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:14.148 [2024-07-24 15:40:35.457031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69553 ] 00:15:14.148 [2024-07-24 15:40:35.626744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.405 [2024-07-24 15:40:35.864681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.405 [2024-07-24 15:40:35.864902] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:14.405 [2024-07-24 15:40:35.864936] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:14.970 00:15:14.970 real 0m0.913s 00:15:14.970 user 0m0.671s 00:15:14.970 sys 0m0.135s 00:15:14.970 15:40:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:14.970 15:40:36 -- common/autotest_common.sh@10 -- # set +x 00:15:14.970 ************************************ 00:15:14.970 END TEST bdev_json_nonenclosed 00:15:14.970 ************************************ 00:15:14.970 15:40:36 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:14.970 15:40:36 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:15:14.970 15:40:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:14.970 15:40:36 -- common/autotest_common.sh@10 -- # set +x 00:15:14.970 ************************************ 00:15:14.970 START TEST bdev_json_nonarray 00:15:14.970 ************************************ 00:15:14.970 15:40:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:14.970 [2024-07-24 15:40:36.412017] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:14.970 [2024-07-24 15:40:36.412185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69580 ] 00:15:15.228 [2024-07-24 15:40:36.577027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.228 [2024-07-24 15:40:36.774296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.228 [2024-07-24 15:40:36.774504] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
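Editor's note: the two negative tests above feed bdevperf deliberately malformed JSON configs. The shapes involved are sketched below with illustrative file contents (the actual fixture files may carry more detail; only the top-level structure matters to the loader):

```bash
# Shapes behind the two JSON negative tests (contents are illustrative).
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
# -> json_config.c: *ERROR*: Invalid JSON configuration: not enclosed in {}.

cat > nonarray.json <<'EOF'
{ "subsystems": {} }
EOF
# -> json_config.c: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.

cat > minimal-valid.json <<'EOF'
{ "subsystems": [] }
EOF
# -> loads cleanly: a top-level object whose "subsystems" key is an array.
```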
00:15:15.228 [2024-07-24 15:40:36.774534] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:15.794 00:15:15.794 real 0m0.847s 00:15:15.794 user 0m0.617s 00:15:15.794 sys 0m0.123s 00:15:15.794 15:40:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:15.794 15:40:37 -- common/autotest_common.sh@10 -- # set +x 00:15:15.794 ************************************ 00:15:15.794 END TEST bdev_json_nonarray 00:15:15.795 ************************************ 00:15:15.795 15:40:37 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:15:15.795 15:40:37 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:15:15.795 15:40:37 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:15:15.795 15:40:37 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:15:15.795 15:40:37 -- bdev/blockdev.sh@809 -- # cleanup 00:15:15.795 15:40:37 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:15.795 15:40:37 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:15.795 15:40:37 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:15:15.795 15:40:37 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:15:15.795 15:40:37 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:15:15.795 15:40:37 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:15:15.795 15:40:37 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:16.729 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:19.256 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:15:19.256 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:15:19.256 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:15:19.256 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:15:19.256 00:15:19.256 real 1m3.716s 00:15:19.256 user 1m43.667s 00:15:19.256 sys 0m34.210s 00:15:19.256 15:40:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.256 ************************************ 00:15:19.256 END TEST blockdev_xnvme 00:15:19.256 15:40:40 -- common/autotest_common.sh@10 -- # set +x 00:15:19.256 ************************************ 00:15:19.256 15:40:40 -- spdk/autotest.sh@259 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:19.256 15:40:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:19.256 15:40:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:19.256 15:40:40 -- common/autotest_common.sh@10 -- # set +x 00:15:19.256 ************************************ 00:15:19.256 START TEST ublk 00:15:19.256 ************************************ 00:15:19.256 15:40:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:19.256 * Looking for test storage... 
00:15:19.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:19.256 15:40:40 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:19.256 15:40:40 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:19.256 15:40:40 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:19.256 15:40:40 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:19.256 15:40:40 -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:19.256 15:40:40 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:19.256 15:40:40 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:19.256 15:40:40 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:19.256 15:40:40 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:19.256 15:40:40 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:19.256 15:40:40 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:19.256 15:40:40 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:19.256 15:40:40 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:19.256 15:40:40 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:19.256 15:40:40 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:19.256 15:40:40 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:19.256 15:40:40 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:19.256 15:40:40 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:19.256 15:40:40 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:19.256 15:40:40 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:19.256 15:40:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:19.256 15:40:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:19.256 15:40:40 -- common/autotest_common.sh@10 -- # set +x 00:15:19.256 ************************************ 00:15:19.256 START TEST test_save_ublk_config 00:15:19.256 ************************************ 00:15:19.256 15:40:40 -- common/autotest_common.sh@1104 -- # test_save_config 00:15:19.256 15:40:40 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:19.256 15:40:40 -- ublk/ublk.sh@103 -- # tgtpid=69879 00:15:19.256 15:40:40 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:19.256 15:40:40 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:19.256 15:40:40 -- ublk/ublk.sh@106 -- # waitforlisten 69879 00:15:19.256 15:40:40 -- common/autotest_common.sh@819 -- # '[' -z 69879 ']' 00:15:19.256 15:40:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.256 15:40:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:19.256 15:40:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.256 15:40:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:19.256 15:40:40 -- common/autotest_common.sh@10 -- # set +x 00:15:19.256 [2024-07-24 15:40:40.759250] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
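Editor's note: once spdk_tgt is up, test_save_ublk_config builds the ublk state it wants to capture and snapshots it with save_config. A sketch of that flow over /var/tmp/spdk.sock, assuming the usual rpc.py option spellings; the JSON parameters (cpumask "1", an 8192-block 4 KiB malloc bdev, one queue of depth 128) are the ones visible in the saved config that follows:

```bash
# Build the state captured by save_config (parameters match the config dump
# below; rpc.py flag spellings here are assumptions).
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc_py" ublk_create_target -c 1                # cpumask "1" = core 0
"$rpc_py" bdev_malloc_create -b malloc0 32 4096  # 32 MiB => 8192 blocks of 4 KiB
"$rpc_py" ublk_start_disk malloc0 0 -q 1 -d 128  # bdev, ublk_id, num_queues, queue_depth
config=$("$rpc_py" save_config)                  # JSON snapshot of the running target
```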
00:15:19.256 [2024-07-24 15:40:40.759495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69879 ] 00:15:19.514 [2024-07-24 15:40:40.938495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.773 [2024-07-24 15:40:41.180896] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:19.773 [2024-07-24 15:40:41.181178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.146 15:40:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:21.146 15:40:42 -- common/autotest_common.sh@852 -- # return 0 00:15:21.146 15:40:42 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:21.146 15:40:42 -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:21.146 15:40:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.146 15:40:42 -- common/autotest_common.sh@10 -- # set +x 00:15:21.146 [2024-07-24 15:40:42.533132] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:21.146 malloc0 00:15:21.146 [2024-07-24 15:40:42.604243] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:21.146 [2024-07-24 15:40:42.604358] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:21.146 [2024-07-24 15:40:42.604374] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:21.146 [2024-07-24 15:40:42.604387] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:21.146 [2024-07-24 15:40:42.613195] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:21.146 [2024-07-24 15:40:42.613226] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:21.146 [2024-07-24 15:40:42.620118] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:21.146 [2024-07-24 15:40:42.620240] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:21.146 [2024-07-24 15:40:42.637113] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:21.146 0 00:15:21.146 15:40:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.146 15:40:42 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:21.146 15:40:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.146 15:40:42 -- common/autotest_common.sh@10 -- # set +x 00:15:21.404 15:40:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.404 15:40:42 -- ublk/ublk.sh@115 -- # config='{ 00:15:21.404 "subsystems": [ 00:15:21.404 { 00:15:21.404 "subsystem": "iobuf", 00:15:21.404 "config": [ 00:15:21.404 { 00:15:21.404 "method": "iobuf_set_options", 00:15:21.404 "params": { 00:15:21.404 "small_pool_count": 8192, 00:15:21.404 "large_pool_count": 1024, 00:15:21.404 "small_bufsize": 8192, 00:15:21.404 "large_bufsize": 135168 00:15:21.404 } 00:15:21.404 } 00:15:21.404 ] 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "subsystem": "sock", 00:15:21.404 "config": [ 00:15:21.404 { 00:15:21.404 "method": "sock_impl_set_options", 00:15:21.404 "params": { 00:15:21.404 "impl_name": "posix", 00:15:21.404 "recv_buf_size": 2097152, 00:15:21.404 "send_buf_size": 2097152, 00:15:21.404 "enable_recv_pipe": true, 00:15:21.404 "enable_quickack": false, 00:15:21.404 "enable_placement_id": 0, 00:15:21.404 
"enable_zerocopy_send_server": true, 00:15:21.404 "enable_zerocopy_send_client": false, 00:15:21.404 "zerocopy_threshold": 0, 00:15:21.404 "tls_version": 0, 00:15:21.404 "enable_ktls": false 00:15:21.404 } 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "method": "sock_impl_set_options", 00:15:21.404 "params": { 00:15:21.404 "impl_name": "ssl", 00:15:21.404 "recv_buf_size": 4096, 00:15:21.404 "send_buf_size": 4096, 00:15:21.404 "enable_recv_pipe": true, 00:15:21.404 "enable_quickack": false, 00:15:21.404 "enable_placement_id": 0, 00:15:21.404 "enable_zerocopy_send_server": true, 00:15:21.404 "enable_zerocopy_send_client": false, 00:15:21.404 "zerocopy_threshold": 0, 00:15:21.404 "tls_version": 0, 00:15:21.404 "enable_ktls": false 00:15:21.404 } 00:15:21.404 } 00:15:21.404 ] 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "subsystem": "vmd", 00:15:21.404 "config": [] 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "subsystem": "accel", 00:15:21.404 "config": [ 00:15:21.404 { 00:15:21.404 "method": "accel_set_options", 00:15:21.404 "params": { 00:15:21.404 "small_cache_size": 128, 00:15:21.404 "large_cache_size": 16, 00:15:21.404 "task_count": 2048, 00:15:21.404 "sequence_count": 2048, 00:15:21.404 "buf_count": 2048 00:15:21.404 } 00:15:21.404 } 00:15:21.404 ] 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "subsystem": "bdev", 00:15:21.404 "config": [ 00:15:21.404 { 00:15:21.404 "method": "bdev_set_options", 00:15:21.404 "params": { 00:15:21.404 "bdev_io_pool_size": 65535, 00:15:21.404 "bdev_io_cache_size": 256, 00:15:21.404 "bdev_auto_examine": true, 00:15:21.404 "iobuf_small_cache_size": 128, 00:15:21.404 "iobuf_large_cache_size": 16 00:15:21.404 } 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "method": "bdev_raid_set_options", 00:15:21.404 "params": { 00:15:21.404 "process_window_size_kb": 1024 00:15:21.404 } 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "method": "bdev_iscsi_set_options", 00:15:21.404 "params": { 00:15:21.404 "timeout_sec": 30 00:15:21.404 } 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "method": "bdev_nvme_set_options", 00:15:21.404 "params": { 00:15:21.404 "action_on_timeout": "none", 00:15:21.404 "timeout_us": 0, 00:15:21.404 "timeout_admin_us": 0, 00:15:21.404 "keep_alive_timeout_ms": 10000, 00:15:21.404 "transport_retry_count": 4, 00:15:21.404 "arbitration_burst": 0, 00:15:21.404 "low_priority_weight": 0, 00:15:21.404 "medium_priority_weight": 0, 00:15:21.404 "high_priority_weight": 0, 00:15:21.404 "nvme_adminq_poll_period_us": 10000, 00:15:21.404 "nvme_ioq_poll_period_us": 0, 00:15:21.404 "io_queue_requests": 0, 00:15:21.404 "delay_cmd_submit": true, 00:15:21.404 "bdev_retry_count": 3, 00:15:21.404 "transport_ack_timeout": 0, 00:15:21.404 "ctrlr_loss_timeout_sec": 0, 00:15:21.404 "reconnect_delay_sec": 0, 00:15:21.404 "fast_io_fail_timeout_sec": 0, 00:15:21.404 "generate_uuids": false, 00:15:21.404 "transport_tos": 0, 00:15:21.404 "io_path_stat": false, 00:15:21.404 "allow_accel_sequence": false 00:15:21.404 } 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "method": "bdev_nvme_set_hotplug", 00:15:21.404 "params": { 00:15:21.404 "period_us": 100000, 00:15:21.404 "enable": false 00:15:21.404 } 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "method": "bdev_malloc_create", 00:15:21.404 "params": { 00:15:21.404 "name": "malloc0", 00:15:21.404 "num_blocks": 8192, 00:15:21.404 "block_size": 4096, 00:15:21.404 "physical_block_size": 4096, 00:15:21.404 "uuid": "022e90c1-264c-4fe4-aaec-dda71f99a070", 00:15:21.404 "optimal_io_boundary": 0 00:15:21.404 } 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 
"method": "bdev_wait_for_examine" 00:15:21.404 } 00:15:21.404 ] 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "subsystem": "scsi", 00:15:21.404 "config": null 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "subsystem": "scheduler", 00:15:21.404 "config": [ 00:15:21.404 { 00:15:21.404 "method": "framework_set_scheduler", 00:15:21.404 "params": { 00:15:21.404 "name": "static" 00:15:21.404 } 00:15:21.404 } 00:15:21.404 ] 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "subsystem": "vhost_scsi", 00:15:21.404 "config": [] 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "subsystem": "vhost_blk", 00:15:21.404 "config": [] 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "subsystem": "ublk", 00:15:21.404 "config": [ 00:15:21.404 { 00:15:21.404 "method": "ublk_create_target", 00:15:21.404 "params": { 00:15:21.404 "cpumask": "1" 00:15:21.404 } 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "method": "ublk_start_disk", 00:15:21.404 "params": { 00:15:21.404 "bdev_name": "malloc0", 00:15:21.404 "ublk_id": 0, 00:15:21.404 "num_queues": 1, 00:15:21.404 "queue_depth": 128 00:15:21.404 } 00:15:21.404 } 00:15:21.404 ] 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "subsystem": "nbd", 00:15:21.404 "config": [] 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "subsystem": "nvmf", 00:15:21.404 "config": [ 00:15:21.404 { 00:15:21.404 "method": "nvmf_set_config", 00:15:21.404 "params": { 00:15:21.404 "discovery_filter": "match_any", 00:15:21.404 "admin_cmd_passthru": { 00:15:21.404 "identify_ctrlr": false 00:15:21.404 } 00:15:21.404 } 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "method": "nvmf_set_max_subsystems", 00:15:21.404 "params": { 00:15:21.404 "max_subsystems": 1024 00:15:21.404 } 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "method": "nvmf_set_crdt", 00:15:21.404 "params": { 00:15:21.404 "crdt1": 0, 00:15:21.404 "crdt2": 0, 00:15:21.404 "crdt3": 0 00:15:21.404 } 00:15:21.404 } 00:15:21.404 ] 00:15:21.404 }, 00:15:21.404 { 00:15:21.404 "subsystem": "iscsi", 00:15:21.404 "config": [ 00:15:21.404 { 00:15:21.404 "method": "iscsi_set_options", 00:15:21.404 "params": { 00:15:21.404 "node_base": "iqn.2016-06.io.spdk", 00:15:21.404 "max_sessions": 128, 00:15:21.404 "max_connections_per_session": 2, 00:15:21.404 "max_queue_depth": 64, 00:15:21.404 "default_time2wait": 2, 00:15:21.404 "default_time2retain": 20, 00:15:21.404 "first_burst_length": 8192, 00:15:21.405 "immediate_data": true, 00:15:21.405 "allow_duplicated_isid": false, 00:15:21.405 "error_recovery_level": 0, 00:15:21.405 "nop_timeout": 60, 00:15:21.405 "nop_in_interval": 30, 00:15:21.405 "disable_chap": false, 00:15:21.405 "require_chap": false, 00:15:21.405 "mutual_chap": false, 00:15:21.405 "chap_group": 0, 00:15:21.405 "max_large_datain_per_connection": 64, 00:15:21.405 "max_r2t_per_connection": 4, 00:15:21.405 "pdu_pool_size": 36864, 00:15:21.405 "immediate_data_pool_size": 16384, 00:15:21.405 "data_out_pool_size": 2048 00:15:21.405 } 00:15:21.405 } 00:15:21.405 ] 00:15:21.405 } 00:15:21.405 ] 00:15:21.405 }' 00:15:21.405 15:40:42 -- ublk/ublk.sh@116 -- # killprocess 69879 00:15:21.405 15:40:42 -- common/autotest_common.sh@926 -- # '[' -z 69879 ']' 00:15:21.405 15:40:42 -- common/autotest_common.sh@930 -- # kill -0 69879 00:15:21.405 15:40:42 -- common/autotest_common.sh@931 -- # uname 00:15:21.405 15:40:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:21.405 15:40:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69879 00:15:21.405 15:40:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:21.405 15:40:42 -- 
common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:21.405 killing process with pid 69879 00:15:21.405 15:40:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69879' 00:15:21.405 15:40:42 -- common/autotest_common.sh@945 -- # kill 69879 00:15:21.405 15:40:42 -- common/autotest_common.sh@950 -- # wait 69879 00:15:22.797 [2024-07-24 15:40:44.239082] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:22.797 [2024-07-24 15:40:44.268181] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:22.797 [2024-07-24 15:40:44.268387] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:22.797 [2024-07-24 15:40:44.277171] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:22.797 [2024-07-24 15:40:44.277239] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:22.797 [2024-07-24 15:40:44.277253] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:22.797 [2024-07-24 15:40:44.277291] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:22.797 [2024-07-24 15:40:44.281274] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:24.174 15:40:45 -- ublk/ublk.sh@119 -- # tgtpid=69948 00:15:24.174 15:40:45 -- ublk/ublk.sh@121 -- # waitforlisten 69948 00:15:24.174 15:40:45 -- common/autotest_common.sh@819 -- # '[' -z 69948 ']' 00:15:24.174 15:40:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.174 15:40:45 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:24.174 15:40:45 -- ublk/ublk.sh@118 -- # echo '{ 00:15:24.174 "subsystems": [ 00:15:24.174 { 00:15:24.174 "subsystem": "iobuf", 00:15:24.174 "config": [ 00:15:24.174 { 00:15:24.174 "method": "iobuf_set_options", 00:15:24.174 "params": { 00:15:24.174 "small_pool_count": 8192, 00:15:24.174 "large_pool_count": 1024, 00:15:24.174 "small_bufsize": 8192, 00:15:24.174 "large_bufsize": 135168 00:15:24.174 } 00:15:24.174 } 00:15:24.174 ] 00:15:24.174 }, 00:15:24.174 { 00:15:24.174 "subsystem": "sock", 00:15:24.174 "config": [ 00:15:24.174 { 00:15:24.174 "method": "sock_impl_set_options", 00:15:24.174 "params": { 00:15:24.174 "impl_name": "posix", 00:15:24.174 "recv_buf_size": 2097152, 00:15:24.174 "send_buf_size": 2097152, 00:15:24.174 "enable_recv_pipe": true, 00:15:24.174 "enable_quickack": false, 00:15:24.174 "enable_placement_id": 0, 00:15:24.174 "enable_zerocopy_send_server": true, 00:15:24.174 "enable_zerocopy_send_client": false, 00:15:24.174 "zerocopy_threshold": 0, 00:15:24.174 "tls_version": 0, 00:15:24.174 "enable_ktls": false 00:15:24.174 } 00:15:24.174 }, 00:15:24.174 { 00:15:24.174 "method": "sock_impl_set_options", 00:15:24.174 "params": { 00:15:24.174 "impl_name": "ssl", 00:15:24.174 "recv_buf_size": 4096, 00:15:24.174 "send_buf_size": 4096, 00:15:24.174 "enable_recv_pipe": true, 00:15:24.174 "enable_quickack": false, 00:15:24.174 "enable_placement_id": 0, 00:15:24.174 "enable_zerocopy_send_server": true, 00:15:24.174 "enable_zerocopy_send_client": false, 00:15:24.174 "zerocopy_threshold": 0, 00:15:24.174 "tls_version": 0, 00:15:24.174 "enable_ktls": false 00:15:24.174 } 00:15:24.174 } 00:15:24.174 ] 00:15:24.174 }, 00:15:24.174 { 00:15:24.174 "subsystem": "vmd", 00:15:24.174 "config": [] 00:15:24.174 }, 00:15:24.174 { 00:15:24.174 "subsystem": "accel", 00:15:24.174 "config": [ 00:15:24.174 { 00:15:24.174 "method": "accel_set_options", 00:15:24.174 "params": { 
00:15:24.174 "small_cache_size": 128, 00:15:24.174 "large_cache_size": 16, 00:15:24.174 "task_count": 2048, 00:15:24.174 "sequence_count": 2048, 00:15:24.174 "buf_count": 2048 00:15:24.174 } 00:15:24.174 } 00:15:24.174 ] 00:15:24.174 }, 00:15:24.174 { 00:15:24.174 "subsystem": "bdev", 00:15:24.174 "config": [ 00:15:24.174 { 00:15:24.174 "method": "bdev_set_options", 00:15:24.174 "params": { 00:15:24.174 "bdev_io_pool_size": 65535, 00:15:24.174 "bdev_io_cache_size": 256, 00:15:24.174 "bdev_auto_examine": true, 00:15:24.174 "iobuf_small_cache_size": 128, 00:15:24.174 "iobuf_large_cache_size": 16 00:15:24.174 } 00:15:24.174 }, 00:15:24.174 { 00:15:24.174 "method": "bdev_raid_set_options", 00:15:24.174 "params": { 00:15:24.174 "process_window_size_kb": 1024 00:15:24.174 } 00:15:24.174 }, 00:15:24.174 { 00:15:24.174 "method": "bdev_iscsi_set_options", 00:15:24.174 "params": { 00:15:24.174 "timeout_sec": 30 00:15:24.174 } 00:15:24.174 }, 00:15:24.174 { 00:15:24.174 "method": "bdev_nvme_set_options", 00:15:24.174 "params": { 00:15:24.174 "action_on_timeout": "none", 00:15:24.174 "timeout_us": 0, 00:15:24.174 "timeout_admin_us": 0, 00:15:24.174 "keep_alive_timeout_ms": 10000, 00:15:24.174 "transport_retry_count": 4, 00:15:24.174 "arbitration_burst": 0, 00:15:24.174 "low_priority_weight": 0, 00:15:24.174 "medium_priority_weight": 0, 00:15:24.174 "high_priority_weight": 0, 00:15:24.174 "nvme_adminq_poll_period_us": 10000, 00:15:24.174 "nvme_ioq_poll_period_us": 0, 00:15:24.174 "io_queue_requests": 0, 00:15:24.174 "delay_cmd_submit": true, 00:15:24.174 "bdev_retry_count": 3, 00:15:24.174 "transport_ack_timeout": 0, 00:15:24.174 "ctrlr_loss_timeout_sec": 0, 00:15:24.174 "reconnect_delay_sec": 0, 00:15:24.174 "fast_io_fail_timeout_sec": 0, 00:15:24.174 "generate_uuids": false, 00:15:24.174 "transport_tos": 0, 00:15:24.174 "io_path_stat": false, 00:15:24.174 "allow_accel_sequence": false 00:15:24.174 } 00:15:24.174 }, 00:15:24.174 { 00:15:24.174 "method": "bdev_nvme_set_hotplug", 00:15:24.174 "params": { 00:15:24.174 "period_us": 100000, 00:15:24.174 "enable": false 00:15:24.174 } 00:15:24.174 }, 00:15:24.174 { 00:15:24.174 "method": "bdev_malloc_create", 00:15:24.174 "params": { 00:15:24.174 "name": "malloc0", 00:15:24.174 "num_blocks": 8192, 00:15:24.174 "block_size": 4096, 00:15:24.174 "physical_block_size": 4096, 00:15:24.174 "uuid": "022e90c1-264c-4fe4-aaec-dda71f99a070", 00:15:24.174 "optimal_io_boundary": 0 00:15:24.174 } 00:15:24.174 }, 00:15:24.174 { 00:15:24.174 "method": "bdev_wait_for_examine" 00:15:24.174 } 00:15:24.174 ] 00:15:24.174 }, 00:15:24.174 { 00:15:24.174 "subsystem": "scsi", 00:15:24.174 "config": null 00:15:24.174 }, 00:15:24.174 { 00:15:24.174 "subsystem": "scheduler", 00:15:24.174 "config": [ 00:15:24.174 { 00:15:24.174 "method": "framework_set_scheduler", 00:15:24.174 "params": { 00:15:24.174 "name": "static" 00:15:24.174 } 00:15:24.174 } 00:15:24.174 ] 00:15:24.174 }, 00:15:24.174 { 00:15:24.174 "subsystem": "vhost_scsi", 00:15:24.174 "config": [] 00:15:24.174 }, 00:15:24.174 { 00:15:24.174 "subsystem": "vhost_blk", 00:15:24.174 "config": [] 00:15:24.174 }, 00:15:24.175 { 00:15:24.175 "subsystem": "ublk", 00:15:24.175 "config": [ 00:15:24.175 { 00:15:24.175 "method": "ublk_create_target", 00:15:24.175 "params": { 00:15:24.175 "cpumask": "1" 00:15:24.175 } 00:15:24.175 }, 00:15:24.175 { 00:15:24.175 "method": "ublk_start_disk", 00:15:24.175 "params": { 00:15:24.175 "bdev_name": "malloc0", 00:15:24.175 "ublk_id": 0, 00:15:24.175 "num_queues": 1, 00:15:24.175 "queue_depth": 128 
00:15:24.175 } 00:15:24.175 } 00:15:24.175 ] 00:15:24.175 }, 00:15:24.175 { 00:15:24.175 "subsystem": "nbd", 00:15:24.175 "config": [] 00:15:24.175 }, 00:15:24.175 { 00:15:24.175 "subsystem": "nvmf", 00:15:24.175 "config": [ 00:15:24.175 { 00:15:24.175 "method": "nvmf_set_config", 00:15:24.175 "params": { 00:15:24.175 "discovery_filter": "match_any", 00:15:24.175 "admin_cmd_passthru": { 00:15:24.175 "identify_ctrlr": false 00:15:24.175 } 00:15:24.175 } 00:15:24.175 }, 00:15:24.175 { 00:15:24.175 "method": "nvmf_set_max_subsystems", 00:15:24.175 "params": { 00:15:24.175 "max_subsystems": 1024 00:15:24.175 } 00:15:24.175 }, 00:15:24.175 { 00:15:24.175 "method": "nvmf_set_crdt", 00:15:24.175 "params": { 00:15:24.175 "crdt1": 0, 00:15:24.175 "crdt2": 0, 00:15:24.175 "crdt3": 0 00:15:24.175 } 00:15:24.175 } 00:15:24.175 ] 00:15:24.175 }, 00:15:24.175 { 00:15:24.175 "subsystem": "iscsi", 00:15:24.175 "config": [ 00:15:24.175 { 00:15:24.175 "method": "iscsi_set_options", 00:15:24.175 "params": { 00:15:24.175 "node_base": "iqn.2016-06.io.spdk", 00:15:24.175 "max_sessions": 128, 00:15:24.175 "max_connections_per_session": 2, 00:15:24.175 "max_queue_depth": 64, 00:15:24.175 "default_time2wait": 2, 00:15:24.175 "default_time2retain": 20, 00:15:24.175 "first_burst_length": 8192, 00:15:24.175 "immediate_data": true, 00:15:24.175 "allow_duplicated_isid": false, 00:15:24.175 "error_recovery_level": 0, 00:15:24.175 "nop_timeout": 60, 00:15:24.175 "nop_in_interval": 30, 00:15:24.175 "disable_chap": false, 00:15:24.175 "require_chap": false, 00:15:24.175 "mutual_chap": false, 00:15:24.175 "chap_group": 0, 00:15:24.175 "max_large_datain_per_connection": 64, 00:15:24.175 "max_r2t_per_connection": 4, 00:15:24.175 "pdu_pool_size": 36864, 00:15:24.175 "immediate_data_pool_size": 16384, 00:15:24.175 "data_out_pool_size": 2048 00:15:24.175 } 00:15:24.175 } 00:15:24.175 ] 00:15:24.175 } 00:15:24.175 ] 00:15:24.175 }' 00:15:24.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.175 15:40:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:24.175 15:40:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.175 15:40:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:24.175 15:40:45 -- common/autotest_common.sh@10 -- # set +x 00:15:24.175 [2024-07-24 15:40:45.595739] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
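The JSON document that closes above is the running target's full configuration as captured by the save-config half of this test. A minimal sketch of the round-trip being exercised, using SPDK's stock rpc.py against the default /var/tmp/spdk.sock socket (file path chosen here for illustration):

    # Capture the live configuration, including the ublk subsystem block above.
    ./scripts/rpc.py save_config > /tmp/ublk_config.json
    # After restarting spdk_tgt, replay the captured state into the new process.
    ./scripts/rpc.py load_config < /tmp/ublk_config.json

The point of the test is that the ublk target and /dev/ublkb0 come back identically from the replayed JSON, which is what the startup sequence below goes on to show.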
00:15:24.175 [2024-07-24 15:40:45.595903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69948 ] 00:15:24.175 [2024-07-24 15:40:45.766931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.434 [2024-07-24 15:40:45.998692] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:24.434 [2024-07-24 15:40:45.998958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.370 [2024-07-24 15:40:46.854104] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:25.370 [2024-07-24 15:40:46.861239] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:25.370 [2024-07-24 15:40:46.861340] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:25.370 [2024-07-24 15:40:46.861356] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:25.370 [2024-07-24 15:40:46.861365] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:25.370 [2024-07-24 15:40:46.870185] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:25.370 [2024-07-24 15:40:46.870215] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:25.370 [2024-07-24 15:40:46.877124] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:25.370 [2024-07-24 15:40:46.877246] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:25.370 [2024-07-24 15:40:46.894112] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:25.936 15:40:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:25.936 15:40:47 -- common/autotest_common.sh@852 -- # return 0 00:15:25.936 15:40:47 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:25.936 15:40:47 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:25.936 15:40:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.936 15:40:47 -- common/autotest_common.sh@10 -- # set +x 00:15:25.936 15:40:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.936 15:40:47 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:25.936 15:40:47 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:25.936 15:40:47 -- ublk/ublk.sh@125 -- # killprocess 69948 00:15:25.936 15:40:47 -- common/autotest_common.sh@926 -- # '[' -z 69948 ']' 00:15:25.936 15:40:47 -- common/autotest_common.sh@930 -- # kill -0 69948 00:15:25.936 15:40:47 -- common/autotest_common.sh@931 -- # uname 00:15:25.936 15:40:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:25.936 15:40:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69948 00:15:25.936 15:40:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:25.936 15:40:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:25.936 killing process with pid 69948 00:15:25.936 15:40:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69948' 00:15:25.936 15:40:47 -- common/autotest_common.sh@945 -- # kill 69948 00:15:25.936 15:40:47 -- common/autotest_common.sh@950 -- # wait 69948 00:15:27.311 [2024-07-24 15:40:48.583070] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 
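Before the target is killed, the trace verifies that the replayed configuration really resurfaced the disk: ublk_get_disks is piped through jq and the result compared against /dev/ublkb0. A condensed sketch of that check (device id 0 assumed, as in this run):

    # Ask the target which ublk disks it is serving and pick the first one.
    dev=$(./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device')
    # It must both match the expected name and exist as a block device.
    [[ "$dev" == /dev/ublkb0 && -b "$dev" ]] && echo 'ublkb0 is live'

The ADD_DEV / SET_PARAMS / START_DEV trio earlier in the trace is the control-command handshake that every ublk_start_disk call performs against the kernel's ublk driver; the STOP_DEV / DEL_DEV pair below is its mirror image at teardown.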
00:15:27.311 [2024-07-24 15:40:48.627145] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:27.311 [2024-07-24 15:40:48.627373] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:27.311 [2024-07-24 15:40:48.638205] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:27.311 [2024-07-24 15:40:48.638312] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:27.311 [2024-07-24 15:40:48.638334] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:27.311 [2024-07-24 15:40:48.638397] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:27.311 [2024-07-24 15:40:48.639279] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:28.684 15:40:49 -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:28.684 00:15:28.684 real 0m9.219s 00:15:28.684 user 0m8.722s 00:15:28.684 sys 0m2.024s 00:15:28.684 15:40:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.684 ************************************ 00:15:28.684 15:40:49 -- common/autotest_common.sh@10 -- # set +x 00:15:28.684 END TEST test_save_ublk_config 00:15:28.684 ************************************ 00:15:28.684 15:40:49 -- ublk/ublk.sh@139 -- # spdk_pid=70029 00:15:28.684 15:40:49 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:28.684 15:40:49 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:28.684 15:40:49 -- ublk/ublk.sh@141 -- # waitforlisten 70029 00:15:28.684 15:40:49 -- common/autotest_common.sh@819 -- # '[' -z 70029 ']' 00:15:28.684 15:40:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.684 15:40:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:28.684 15:40:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.684 15:40:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:28.684 15:40:49 -- common/autotest_common.sh@10 -- # set +x 00:15:28.684 [2024-07-24 15:40:49.986742] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
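With pid 69948 gone, a second spdk_tgt (pid 70029) is launched for the create tests and the script parks in waitforlisten until the RPC socket answers. Roughly, and bounded by max_retries=100 in the real helper:

    # Poll until the new target accepts RPCs on /var/tmp/spdk.sock;
    # rpc_get_methods is a cheap query any live target can answer.
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done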
00:15:28.684 [2024-07-24 15:40:49.986880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70029 ] 00:15:28.684 [2024-07-24 15:40:50.146149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:28.942 [2024-07-24 15:40:50.356845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:28.942 [2024-07-24 15:40:50.357354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.942 [2024-07-24 15:40:50.357359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.315 15:40:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:30.315 15:40:51 -- common/autotest_common.sh@852 -- # return 0 00:15:30.315 15:40:51 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:30.315 15:40:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:30.315 15:40:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:30.315 15:40:51 -- common/autotest_common.sh@10 -- # set +x 00:15:30.315 ************************************ 00:15:30.315 START TEST test_create_ublk 00:15:30.315 ************************************ 00:15:30.315 15:40:51 -- common/autotest_common.sh@1104 -- # test_create_ublk 00:15:30.315 15:40:51 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:30.315 15:40:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.315 15:40:51 -- common/autotest_common.sh@10 -- # set +x 00:15:30.315 [2024-07-24 15:40:51.746479] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:30.315 15:40:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.315 15:40:51 -- ublk/ublk.sh@33 -- # ublk_target= 00:15:30.315 15:40:51 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:30.315 15:40:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.315 15:40:51 -- common/autotest_common.sh@10 -- # set +x 00:15:30.573 15:40:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.573 15:40:52 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:30.573 15:40:52 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:30.573 15:40:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.573 15:40:52 -- common/autotest_common.sh@10 -- # set +x 00:15:30.573 [2024-07-24 15:40:52.016276] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:30.573 [2024-07-24 15:40:52.016754] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:30.573 [2024-07-24 15:40:52.016780] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:30.573 [2024-07-24 15:40:52.016794] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:30.573 [2024-07-24 15:40:52.025378] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:30.573 [2024-07-24 15:40:52.025413] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:30.573 [2024-07-24 15:40:52.032125] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:30.573 [2024-07-24 15:40:52.046372] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:30.573 [2024-07-24 15:40:52.058238] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:15:30.573 15:40:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.573 15:40:52 -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:30.573 15:40:52 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:30.573 15:40:52 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:30.573 15:40:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.573 15:40:52 -- common/autotest_common.sh@10 -- # set +x 00:15:30.573 15:40:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.573 15:40:52 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:30.573 { 00:15:30.573 "ublk_device": "/dev/ublkb0", 00:15:30.573 "id": 0, 00:15:30.573 "queue_depth": 512, 00:15:30.573 "num_queues": 4, 00:15:30.573 "bdev_name": "Malloc0" 00:15:30.573 } 00:15:30.573 ]' 00:15:30.573 15:40:52 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:30.573 15:40:52 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:30.573 15:40:52 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:30.832 15:40:52 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:30.832 15:40:52 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:30.832 15:40:52 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:30.832 15:40:52 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:30.832 15:40:52 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:30.832 15:40:52 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:30.832 15:40:52 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:30.832 15:40:52 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:30.832 15:40:52 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:30.832 15:40:52 -- lvol/common.sh@41 -- # local offset=0 00:15:30.832 15:40:52 -- lvol/common.sh@42 -- # local size=134217728 00:15:30.832 15:40:52 -- lvol/common.sh@43 -- # local rw=write 00:15:30.832 15:40:52 -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:30.832 15:40:52 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:30.832 15:40:52 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:30.832 15:40:52 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:30.832 15:40:52 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:30.832 15:40:52 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:30.832 15:40:52 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:31.091 fio: verification read phase will never start because write phase uses all of runtime 00:15:31.091 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:31.091 fio-3.35 00:15:31.091 Starting 1 process 00:15:41.058 00:15:41.058 fio_test: (groupid=0, jobs=1): err= 0: pid=70083: Wed Jul 24 15:41:02 2024 00:15:41.058 write: IOPS=9506, BW=37.1MiB/s (38.9MB/s)(371MiB/10001msec); 0 zone resets 00:15:41.059 clat (usec): min=66, max=7878, avg=102.76, stdev=170.08 00:15:41.059 lat (usec): min=68, max=7879, avg=104.00, stdev=170.11 00:15:41.059 clat percentiles (usec): 00:15:41.059 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 80], 00:15:41.059 | 
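The fio command assembled by run_fio_test is spelled out in full in the trace; stripped of the shell plumbing it reduces to the invocation below. Note the warning fio prints above: because --time_based spends the entire runtime writing, the verification read phase never starts, but --verify_pattern=0xcc still stamps every written block.

    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0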
30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 89], 60.00th=[ 92], 00:15:41.059 | 70.00th=[ 95], 80.00th=[ 103], 90.00th=[ 118], 95.00th=[ 131], 00:15:41.059 | 99.00th=[ 153], 99.50th=[ 178], 99.90th=[ 3359], 99.95th=[ 3621], 00:15:41.059 | 99.99th=[ 4080] 00:15:41.059 bw ( KiB/s): min=15992, max=42232, per=99.64%, avg=37889.26, stdev=5750.70, samples=19 00:15:41.059 iops : min= 3998, max=10558, avg=9472.32, stdev=1437.68, samples=19 00:15:41.059 lat (usec) : 100=77.43%, 250=22.11%, 500=0.01%, 750=0.03%, 1000=0.03% 00:15:41.059 lat (msec) : 2=0.12%, 4=0.25%, 10=0.01% 00:15:41.059 cpu : usr=4.28%, sys=9.89%, ctx=95087, majf=0, minf=795 00:15:41.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:41.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.059 issued rwts: total=0,95079,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:41.059 00:15:41.059 Run status group 0 (all jobs): 00:15:41.059 WRITE: bw=37.1MiB/s (38.9MB/s), 37.1MiB/s-37.1MiB/s (38.9MB/s-38.9MB/s), io=371MiB (389MB), run=10001-10001msec 00:15:41.059 00:15:41.059 Disk stats (read/write): 00:15:41.059 ublkb0: ios=0/93961, merge=0/0, ticks=0/8493, in_queue=8493, util=99.08% 00:15:41.059 15:41:02 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:41.059 15:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.059 15:41:02 -- common/autotest_common.sh@10 -- # set +x 00:15:41.059 [2024-07-24 15:41:02.557709] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:41.059 [2024-07-24 15:41:02.597213] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:41.059 [2024-07-24 15:41:02.598852] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:41.059 [2024-07-24 15:41:02.605160] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:41.059 [2024-07-24 15:41:02.605518] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:41.059 [2024-07-24 15:41:02.605534] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:41.059 15:41:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.059 15:41:02 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:15:41.059 15:41:02 -- common/autotest_common.sh@640 -- # local es=0 00:15:41.059 15:41:02 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:41.059 15:41:02 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:15:41.059 15:41:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:41.059 15:41:02 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:15:41.059 15:41:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:41.059 15:41:02 -- common/autotest_common.sh@643 -- # rpc_cmd ublk_stop_disk 0 00:15:41.059 15:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.059 15:41:02 -- common/autotest_common.sh@10 -- # set +x 00:15:41.059 [2024-07-24 15:41:02.621282] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:41.059 request: 00:15:41.059 { 00:15:41.059 "ublk_id": 0, 00:15:41.059 "method": "ublk_stop_disk", 00:15:41.059 "req_id": 1 00:15:41.059 } 00:15:41.059 Got JSON-RPC error response 00:15:41.059 response: 00:15:41.059 { 00:15:41.059 "code": -19, 00:15:41.059 "message": "No such device" 
00:15:41.059 } 00:15:41.059 15:41:02 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:41.059 15:41:02 -- common/autotest_common.sh@643 -- # es=1 00:15:41.059 15:41:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:41.059 15:41:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:41.059 15:41:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:41.059 15:41:02 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:41.059 15:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.059 15:41:02 -- common/autotest_common.sh@10 -- # set +x 00:15:41.059 [2024-07-24 15:41:02.637244] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:41.059 [2024-07-24 15:41:02.644135] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:41.059 [2024-07-24 15:41:02.644191] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:41.059 15:41:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.059 15:41:02 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:41.059 15:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.059 15:41:02 -- common/autotest_common.sh@10 -- # set +x 00:15:41.622 15:41:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.622 15:41:02 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:41.622 15:41:02 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:41.622 15:41:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.622 15:41:02 -- common/autotest_common.sh@10 -- # set +x 00:15:41.622 15:41:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.622 15:41:02 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:41.622 15:41:02 -- lvol/common.sh@26 -- # jq length 00:15:41.622 15:41:03 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:41.622 15:41:03 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:41.622 15:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.622 15:41:03 -- common/autotest_common.sh@10 -- # set +x 00:15:41.622 15:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.622 15:41:03 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:41.622 15:41:03 -- lvol/common.sh@28 -- # jq length 00:15:41.622 ************************************ 00:15:41.622 END TEST test_create_ublk 00:15:41.622 ************************************ 00:15:41.622 15:41:03 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:41.622 00:15:41.622 real 0m11.342s 00:15:41.622 user 0m0.851s 00:15:41.622 sys 0m1.081s 00:15:41.622 15:41:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:41.622 15:41:03 -- common/autotest_common.sh@10 -- # set +x 00:15:41.622 15:41:03 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:41.622 15:41:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:41.622 15:41:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:41.622 15:41:03 -- common/autotest_common.sh@10 -- # set +x 00:15:41.622 ************************************ 00:15:41.622 START TEST test_create_multi_ublk 00:15:41.622 ************************************ 00:15:41.622 15:41:03 -- common/autotest_common.sh@1104 -- # test_create_multi_ublk 00:15:41.622 15:41:03 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:41.622 15:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.622 15:41:03 -- common/autotest_common.sh@10 -- # set +x 00:15:41.622 [2024-07-24 15:41:03.131367] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:41.622 
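test_create_multi_ublk repeats the same bdev-plus-disk pair over seq 0 $MAX_DEV_ID; with MAX_DEV_ID=3, the loop the trace is about to expand condenses to the sketch below (the numbers are copied from the rpc_cmd calls that follow):

    # Four 128 MiB malloc bdevs, each exported as its own /dev/ublkbN
    # with 4 queues of depth 512.
    for i in 0 1 2 3; do
        ./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096
        ./scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512
    done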
15:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.622 15:41:03 -- ublk/ublk.sh@62 -- # ublk_target= 00:15:41.622 15:41:03 -- ublk/ublk.sh@64 -- # seq 0 3 00:15:41.622 15:41:03 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.622 15:41:03 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:41.622 15:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.622 15:41:03 -- common/autotest_common.sh@10 -- # set +x 00:15:41.878 15:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.878 15:41:03 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:41.878 15:41:03 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:41.878 15:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.878 15:41:03 -- common/autotest_common.sh@10 -- # set +x 00:15:41.878 [2024-07-24 15:41:03.369363] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:41.878 [2024-07-24 15:41:03.370063] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:41.878 [2024-07-24 15:41:03.370120] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:41.878 [2024-07-24 15:41:03.370147] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:41.878 [2024-07-24 15:41:03.378341] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:41.878 [2024-07-24 15:41:03.378411] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:41.878 [2024-07-24 15:41:03.385130] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:41.878 [2024-07-24 15:41:03.386012] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:41.878 [2024-07-24 15:41:03.401204] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:41.878 15:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.878 15:41:03 -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:41.878 15:41:03 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.878 15:41:03 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:41.878 15:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.878 15:41:03 -- common/autotest_common.sh@10 -- # set +x 00:15:42.443 15:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.443 15:41:03 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:42.443 15:41:03 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:42.443 15:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.443 15:41:03 -- common/autotest_common.sh@10 -- # set +x 00:15:42.443 [2024-07-24 15:41:03.753364] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:42.443 [2024-07-24 15:41:03.753898] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:42.443 [2024-07-24 15:41:03.753937] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:42.443 [2024-07-24 15:41:03.753948] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:42.443 [2024-07-24 15:41:03.761152] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:42.443 [2024-07-24 15:41:03.761187] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:42.443 [2024-07-24 
15:41:03.769142] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:42.443 [2024-07-24 15:41:03.769916] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:42.443 [2024-07-24 15:41:03.782126] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:42.443 15:41:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.443 15:41:03 -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:42.443 15:41:03 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:42.443 15:41:03 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:42.443 15:41:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.443 15:41:03 -- common/autotest_common.sh@10 -- # set +x 00:15:42.443 15:41:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.443 15:41:04 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:42.443 15:41:04 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:42.443 15:41:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.443 15:41:04 -- common/autotest_common.sh@10 -- # set +x 00:15:42.443 [2024-07-24 15:41:04.037371] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:42.443 [2024-07-24 15:41:04.037882] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:42.443 [2024-07-24 15:41:04.037905] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:42.443 [2024-07-24 15:41:04.037921] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:42.717 [2024-07-24 15:41:04.045132] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:42.717 [2024-07-24 15:41:04.045176] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:42.717 [2024-07-24 15:41:04.053145] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:42.717 [2024-07-24 15:41:04.053943] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:42.717 [2024-07-24 15:41:04.066134] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:42.717 15:41:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.717 15:41:04 -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:42.717 15:41:04 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:42.717 15:41:04 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:42.717 15:41:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.717 15:41:04 -- common/autotest_common.sh@10 -- # set +x 00:15:42.985 15:41:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.985 15:41:04 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:42.985 15:41:04 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:42.985 15:41:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.985 15:41:04 -- common/autotest_common.sh@10 -- # set +x 00:15:42.985 [2024-07-24 15:41:04.368288] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:42.985 [2024-07-24 15:41:04.368787] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:42.985 [2024-07-24 15:41:04.368815] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:42.985 [2024-07-24 15:41:04.368827] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: 
ctrl cmd UBLK_CMD_ADD_DEV 00:15:42.985 [2024-07-24 15:41:04.376165] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:42.985 [2024-07-24 15:41:04.376202] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:42.985 [2024-07-24 15:41:04.384131] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:42.985 [2024-07-24 15:41:04.384935] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:42.985 [2024-07-24 15:41:04.389224] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:42.985 15:41:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.985 15:41:04 -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:42.985 15:41:04 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:42.985 15:41:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.985 15:41:04 -- common/autotest_common.sh@10 -- # set +x 00:15:42.985 15:41:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.985 15:41:04 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:42.985 { 00:15:42.985 "ublk_device": "/dev/ublkb0", 00:15:42.985 "id": 0, 00:15:42.985 "queue_depth": 512, 00:15:42.985 "num_queues": 4, 00:15:42.985 "bdev_name": "Malloc0" 00:15:42.985 }, 00:15:42.985 { 00:15:42.985 "ublk_device": "/dev/ublkb1", 00:15:42.985 "id": 1, 00:15:42.985 "queue_depth": 512, 00:15:42.985 "num_queues": 4, 00:15:42.985 "bdev_name": "Malloc1" 00:15:42.985 }, 00:15:42.985 { 00:15:42.985 "ublk_device": "/dev/ublkb2", 00:15:42.985 "id": 2, 00:15:42.985 "queue_depth": 512, 00:15:42.985 "num_queues": 4, 00:15:42.985 "bdev_name": "Malloc2" 00:15:42.985 }, 00:15:42.985 { 00:15:42.985 "ublk_device": "/dev/ublkb3", 00:15:42.985 "id": 3, 00:15:42.985 "queue_depth": 512, 00:15:42.985 "num_queues": 4, 00:15:42.985 "bdev_name": "Malloc3" 00:15:42.985 } 00:15:42.985 ]' 00:15:42.985 15:41:04 -- ublk/ublk.sh@72 -- # seq 0 3 00:15:42.985 15:41:04 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:42.985 15:41:04 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:42.985 15:41:04 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:42.986 15:41:04 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:42.986 15:41:04 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:42.986 15:41:04 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:43.247 15:41:04 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:43.247 15:41:04 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:43.247 15:41:04 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:43.247 15:41:04 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:43.247 15:41:04 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:43.247 15:41:04 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:43.247 15:41:04 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:43.247 15:41:04 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:15:43.247 15:41:04 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:43.247 15:41:04 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:43.247 15:41:04 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:43.504 15:41:04 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:43.504 15:41:04 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:43.504 15:41:04 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:43.504 15:41:04 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:43.504 15:41:04 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:43.504 15:41:04 -- ublk/ublk.sh@72 
-- # for i in $(seq 0 $MAX_DEV_ID) 00:15:43.504 15:41:04 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:43.504 15:41:05 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:43.504 15:41:05 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:43.504 15:41:05 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:43.761 15:41:05 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:43.761 15:41:05 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:43.761 15:41:05 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:43.761 15:41:05 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:43.761 15:41:05 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:43.761 15:41:05 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:43.761 15:41:05 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:43.762 15:41:05 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:43.762 15:41:05 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:43.762 15:41:05 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:44.018 15:41:05 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:44.018 15:41:05 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:15:44.018 15:41:05 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:44.018 15:41:05 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:44.018 15:41:05 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:44.018 15:41:05 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:44.018 15:41:05 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:44.018 15:41:05 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:44.018 15:41:05 -- ublk/ublk.sh@85 -- # seq 0 3 00:15:44.018 15:41:05 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:44.018 15:41:05 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:44.018 15:41:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:44.018 15:41:05 -- common/autotest_common.sh@10 -- # set +x 00:15:44.018 [2024-07-24 15:41:05.545559] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:44.018 [2024-07-24 15:41:05.592612] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:44.018 [2024-07-24 15:41:05.597439] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:44.018 [2024-07-24 15:41:05.606305] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:44.018 [2024-07-24 15:41:05.606751] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:44.018 [2024-07-24 15:41:05.606781] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:44.018 15:41:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:44.018 15:41:05 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:44.018 15:41:05 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:44.018 15:41:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:44.018 15:41:05 -- common/autotest_common.sh@10 -- # set +x 00:15:44.018 [2024-07-24 15:41:05.614279] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:44.275 [2024-07-24 15:41:05.652216] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:44.275 [2024-07-24 15:41:05.654065] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:44.275 [2024-07-24 15:41:05.661133] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:44.275 [2024-07-24 15:41:05.661557] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: 
ublk1: remove from tailq 00:15:44.275 [2024-07-24 15:41:05.661594] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:44.275 15:41:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:44.275 15:41:05 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:44.275 15:41:05 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:44.275 15:41:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:44.275 15:41:05 -- common/autotest_common.sh@10 -- # set +x 00:15:44.275 [2024-07-24 15:41:05.669242] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:44.275 [2024-07-24 15:41:05.709142] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:44.275 [2024-07-24 15:41:05.714449] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:44.275 [2024-07-24 15:41:05.723181] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:44.275 [2024-07-24 15:41:05.723653] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:44.275 [2024-07-24 15:41:05.723694] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:44.275 15:41:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:44.275 15:41:05 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:44.275 15:41:05 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:44.275 15:41:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:44.275 15:41:05 -- common/autotest_common.sh@10 -- # set +x 00:15:44.275 [2024-07-24 15:41:05.731425] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:44.275 [2024-07-24 15:41:05.776598] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:44.275 [2024-07-24 15:41:05.778146] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:44.275 [2024-07-24 15:41:05.784170] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:44.275 [2024-07-24 15:41:05.784587] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:44.275 [2024-07-24 15:41:05.784644] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:44.275 15:41:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:44.275 15:41:05 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:44.533 [2024-07-24 15:41:06.080281] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:44.533 [2024-07-24 15:41:06.086256] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:44.533 [2024-07-24 15:41:06.086338] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:44.533 15:41:06 -- ublk/ublk.sh@93 -- # seq 0 3 00:15:44.533 15:41:06 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:44.533 15:41:06 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:44.533 15:41:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:44.533 15:41:06 -- common/autotest_common.sh@10 -- # set +x 00:15:45.098 15:41:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:45.098 15:41:06 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:45.098 15:41:06 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:45.098 15:41:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:45.098 15:41:06 -- common/autotest_common.sh@10 -- # set +x 00:15:45.356 15:41:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:45.356 
15:41:06 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:45.356 15:41:06 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:45.356 15:41:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:45.356 15:41:06 -- common/autotest_common.sh@10 -- # set +x 00:15:45.613 15:41:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:45.613 15:41:07 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:45.613 15:41:07 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:45.613 15:41:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:45.613 15:41:07 -- common/autotest_common.sh@10 -- # set +x 00:15:46.180 15:41:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.180 15:41:07 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:46.180 15:41:07 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:46.180 15:41:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.180 15:41:07 -- common/autotest_common.sh@10 -- # set +x 00:15:46.180 15:41:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.180 15:41:07 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:46.180 15:41:07 -- lvol/common.sh@26 -- # jq length 00:15:46.180 15:41:07 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:46.180 15:41:07 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:46.180 15:41:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.180 15:41:07 -- common/autotest_common.sh@10 -- # set +x 00:15:46.180 15:41:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.180 15:41:07 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:46.180 15:41:07 -- lvol/common.sh@28 -- # jq length 00:15:46.180 ************************************ 00:15:46.180 END TEST test_create_multi_ublk 00:15:46.180 ************************************ 00:15:46.180 15:41:07 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:46.180 00:15:46.180 real 0m4.527s 00:15:46.180 user 0m1.424s 00:15:46.180 sys 0m0.152s 00:15:46.180 15:41:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.180 15:41:07 -- common/autotest_common.sh@10 -- # set +x 00:15:46.180 15:41:07 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:46.180 15:41:07 -- ublk/ublk.sh@147 -- # cleanup 00:15:46.180 15:41:07 -- ublk/ublk.sh@130 -- # killprocess 70029 00:15:46.180 15:41:07 -- common/autotest_common.sh@926 -- # '[' -z 70029 ']' 00:15:46.180 15:41:07 -- common/autotest_common.sh@930 -- # kill -0 70029 00:15:46.180 15:41:07 -- common/autotest_common.sh@931 -- # uname 00:15:46.180 15:41:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:46.180 15:41:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70029 00:15:46.180 killing process with pid 70029 00:15:46.180 15:41:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:46.180 15:41:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:46.180 15:41:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70029' 00:15:46.180 15:41:07 -- common/autotest_common.sh@945 -- # kill 70029 00:15:46.180 15:41:07 -- common/autotest_common.sh@950 -- # wait 70029 00:15:47.554 [2024-07-24 15:41:08.714942] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:47.554 [2024-07-24 15:41:08.715027] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:48.487 00:15:48.487 real 0m29.325s 00:15:48.487 user 0m45.980s 00:15:48.487 sys 0m7.969s 00:15:48.487 15:41:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:48.487 
************************************ 00:15:48.487 END TEST ublk 00:15:48.487 ************************************ 00:15:48.487 15:41:09 -- common/autotest_common.sh@10 -- # set +x 00:15:48.487 15:41:09 -- spdk/autotest.sh@260 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:48.487 15:41:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:48.488 15:41:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:48.488 15:41:09 -- common/autotest_common.sh@10 -- # set +x 00:15:48.488 ************************************ 00:15:48.488 START TEST ublk_recovery 00:15:48.488 ************************************ 00:15:48.488 15:41:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:48.488 * Looking for test storage... 00:15:48.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:48.488 15:41:09 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:48.488 15:41:09 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:48.488 15:41:09 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:48.488 15:41:09 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:48.488 15:41:09 -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:48.488 15:41:09 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:48.488 15:41:09 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:48.488 15:41:09 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:48.488 15:41:09 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:48.488 15:41:09 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:48.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.488 15:41:09 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=70426 00:15:48.488 15:41:09 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:48.488 15:41:09 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:48.488 15:41:09 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 70426 00:15:48.488 15:41:09 -- common/autotest_common.sh@819 -- # '[' -z 70426 ']' 00:15:48.488 15:41:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.488 15:41:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:48.488 15:41:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.488 15:41:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:48.488 15:41:09 -- common/autotest_common.sh@10 -- # set +x 00:15:48.746 [2024-07-24 15:41:10.111982] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
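The recovery suite needs the kernel driver and a debug-logging target in place before anything else; its prologue above amounts to the following, with the binary location taken from the $SPDK_BIN_DIR variable the script itself uses:

    modprobe ublk_drv                           # provides the kernel ublk control device
    "$SPDK_BIN_DIR"/spdk_tgt -m 0x3 -L ublk &   # two cores, ublk debug logging on
    spdk_pid=$!                                 # remembered so the test can kill it later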
00:15:48.746 [2024-07-24 15:41:10.112226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70426 ] 00:15:48.746 [2024-07-24 15:41:10.294221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:49.004 [2024-07-24 15:41:10.510399] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:49.004 [2024-07-24 15:41:10.510761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.004 [2024-07-24 15:41:10.510898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.376 15:41:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:50.376 15:41:11 -- common/autotest_common.sh@852 -- # return 0 00:15:50.376 15:41:11 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:50.376 15:41:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.376 15:41:11 -- common/autotest_common.sh@10 -- # set +x 00:15:50.376 [2024-07-24 15:41:11.836552] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:50.376 15:41:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.376 15:41:11 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:50.376 15:41:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.376 15:41:11 -- common/autotest_common.sh@10 -- # set +x 00:15:50.376 malloc0 00:15:50.376 15:41:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.376 15:41:11 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:50.376 15:41:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.376 15:41:11 -- common/autotest_common.sh@10 -- # set +x 00:15:50.376 [2024-07-24 15:41:11.970353] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:15:50.376 [2024-07-24 15:41:11.970538] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:50.376 [2024-07-24 15:41:11.970559] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:50.376 [2024-07-24 15:41:11.970573] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:50.633 [2024-07-24 15:41:11.978396] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:50.633 [2024-07-24 15:41:11.978480] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:50.633 [2024-07-24 15:41:11.986169] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:50.633 [2024-07-24 15:41:11.986433] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:50.633 [2024-07-24 15:41:12.009185] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:50.633 1 00:15:50.633 15:41:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.633 15:41:12 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:51.567 15:41:13 -- ublk/ublk_recovery.sh@31 -- # fio_proc=70474 00:15:51.567 15:41:13 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:51.567 15:41:13 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:51.567 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 
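What follows is the heart of the test: sixty seconds of random I/O are kept in flight against /dev/ublkb1 while the target is killed outright, and a fresh target then reclaims the same kernel device. A condensed sketch of that choreography, with the sleeps and pid bookkeeping of the real script elided:

    # Keep I/O running against the recoverable disk (command as in the trace).
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    kill -9 "$spdk_pid"                         # simulate a hard target crash
    "$SPDK_BIN_DIR"/spdk_tgt -m 0x3 -L ublk &   # replacement target
    spdk_pid=$!
    # (the real script waits here for the RPC socket to come back)
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    ./scripts/rpc.py ublk_recover_disk malloc0 1   # reattach the orphaned ublkb1

GET_DEV_INFO, START_USER_RECOVERY and END_USER_RECOVERY in the trace below are the control commands ublk_recover_disk drives in order to hand the existing queues over to the new process.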
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:51.567 fio-3.35 00:15:51.567 Starting 1 process 00:15:56.840 15:41:18 -- ublk/ublk_recovery.sh@36 -- # kill -9 70426 00:15:56.840 15:41:18 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:02.101 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 70426 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:02.101 15:41:23 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=70581 00:16:02.101 15:41:23 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:02.101 15:41:23 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:02.101 15:41:23 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 70581 00:16:02.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.101 15:41:23 -- common/autotest_common.sh@819 -- # '[' -z 70581 ']' 00:16:02.101 15:41:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.101 15:41:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:02.101 15:41:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.101 15:41:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:02.101 15:41:23 -- common/autotest_common.sh@10 -- # set +x 00:16:02.101 [2024-07-24 15:41:23.170229] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:02.101 [2024-07-24 15:41:23.170463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70581 ] 00:16:02.101 [2024-07-24 15:41:23.356330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:02.101 [2024-07-24 15:41:23.662359] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:02.101 [2024-07-24 15:41:23.663042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.101 [2024-07-24 15:41:23.663047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.998 15:41:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:03.998 15:41:25 -- common/autotest_common.sh@852 -- # return 0 00:16:03.998 15:41:25 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:03.998 15:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.998 15:41:25 -- common/autotest_common.sh@10 -- # set +x 00:16:03.998 [2024-07-24 15:41:25.085016] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:03.998 15:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.998 15:41:25 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:03.998 15:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.998 15:41:25 -- common/autotest_common.sh@10 -- # set +x 00:16:03.998 malloc0 00:16:03.998 15:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.998 15:41:25 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:03.998 15:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.998 15:41:25 -- common/autotest_common.sh@10 -- # set +x 00:16:03.998 [2024-07-24 15:41:25.271441] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:03.998 [2024-07-24 15:41:25.271549] ublk.c: 
933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:03.998 [2024-07-24 15:41:25.271575] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:03.998 1 00:16:03.998 15:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.998 15:41:25 -- ublk/ublk_recovery.sh@52 -- # wait 70474 00:16:03.998 [2024-07-24 15:41:25.281173] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:03.998 [2024-07-24 15:41:25.281248] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:03.998 [2024-07-24 15:41:25.281375] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:30.525 [2024-07-24 15:41:48.276952] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:30.525 [2024-07-24 15:41:48.283496] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:30.525 [2024-07-24 15:41:48.289377] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:30.525 [2024-07-24 15:41:48.289443] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:52.454 00:16:52.454 fio_test: (groupid=0, jobs=1): err= 0: pid=70477: Wed Jul 24 15:42:13 2024 00:16:52.454 read: IOPS=8655, BW=33.8MiB/s (35.5MB/s)(2029MiB/60002msec) 00:16:52.454 slat (nsec): min=1904, max=665270, avg=7041.75, stdev=3078.15 00:16:52.454 clat (usec): min=1156, max=30278k, avg=7273.56, stdev=333390.56 00:16:52.454 lat (usec): min=1186, max=30278k, avg=7280.60, stdev=333390.56 00:16:52.454 clat percentiles (msec): 00:16:52.454 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:16:52.454 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:16:52.454 | 70.00th=[ 4], 80.00th=[ 5], 90.00th=[ 5], 95.00th=[ 6], 00:16:52.454 | 99.00th=[ 9], 99.50th=[ 9], 99.90th=[ 14], 99.95th=[ 15], 00:16:52.454 | 99.99th=[17113] 00:16:52.454 bw ( KiB/s): min=30624, max=81240, per=100.00%, avg=69225.36, stdev=10428.78, samples=59 00:16:52.454 iops : min= 7656, max=20310, avg=17306.32, stdev=2607.19, samples=59 00:16:52.454 write: IOPS=8641, BW=33.8MiB/s (35.4MB/s)(2025MiB/60002msec); 0 zone resets 00:16:52.454 slat (nsec): min=1954, max=1371.3k, avg=7095.85, stdev=3707.80 00:16:52.454 clat (usec): min=796, max=30278k, avg=7510.56, stdev=338914.48 00:16:52.454 lat (usec): min=818, max=30278k, avg=7517.66, stdev=338914.47 00:16:52.454 clat percentiles (msec): 00:16:52.454 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 4], 20.00th=[ 4], 00:16:52.454 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:16:52.454 | 70.00th=[ 4], 80.00th=[ 5], 90.00th=[ 5], 95.00th=[ 6], 00:16:52.454 | 99.00th=[ 9], 99.50th=[ 9], 99.90th=[ 14], 99.95th=[ 15], 00:16:52.454 | 99.99th=[17113] 00:16:52.454 bw ( KiB/s): min=30728, max=80544, per=100.00%, avg=69130.07, stdev=10377.40, samples=59 00:16:52.454 iops : min= 7682, max=20136, avg=17282.51, stdev=2594.35, samples=59 00:16:52.454 lat (usec) : 1000=0.01% 00:16:52.454 lat (msec) : 2=0.05%, 4=73.27%, 10=26.50%, 20=0.16%, 50=0.01% 00:16:52.454 lat (msec) : >=2000=0.01% 00:16:52.454 cpu : usr=5.05%, sys=11.70%, ctx=35736, majf=0, minf=14 00:16:52.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:52.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:16:52.454 issued rwts: total=519338,518504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.454 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:52.454 00:16:52.454 Run status group 0 (all jobs): 00:16:52.454 READ: bw=33.8MiB/s (35.5MB/s), 33.8MiB/s-33.8MiB/s (35.5MB/s-35.5MB/s), io=2029MiB (2127MB), run=60002-60002msec 00:16:52.454 WRITE: bw=33.8MiB/s (35.4MB/s), 33.8MiB/s-33.8MiB/s (35.4MB/s-35.4MB/s), io=2025MiB (2124MB), run=60002-60002msec 00:16:52.454 00:16:52.454 Disk stats (read/write): 00:16:52.454 ublkb1: ios=517199/516410, merge=0/0, ticks=3722903/3776978, in_queue=7499881, util=99.91% 00:16:52.454 15:42:13 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:52.454 15:42:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:52.454 15:42:13 -- common/autotest_common.sh@10 -- # set +x 00:16:52.454 [2024-07-24 15:42:13.280421] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:52.454 [2024-07-24 15:42:13.331249] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:52.454 [2024-07-24 15:42:13.331683] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:52.454 [2024-07-24 15:42:13.347208] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:52.454 [2024-07-24 15:42:13.347386] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:52.454 [2024-07-24 15:42:13.347406] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:52.454 15:42:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:52.454 15:42:13 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:52.454 15:42:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:52.454 15:42:13 -- common/autotest_common.sh@10 -- # set +x 00:16:52.454 [2024-07-24 15:42:13.363279] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:52.454 [2024-07-24 15:42:13.371128] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:52.454 [2024-07-24 15:42:13.371204] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:52.454 15:42:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:52.454 15:42:13 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:52.454 15:42:13 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:52.454 15:42:13 -- ublk/ublk_recovery.sh@14 -- # killprocess 70581 00:16:52.454 15:42:13 -- common/autotest_common.sh@926 -- # '[' -z 70581 ']' 00:16:52.454 15:42:13 -- common/autotest_common.sh@930 -- # kill -0 70581 00:16:52.454 15:42:13 -- common/autotest_common.sh@931 -- # uname 00:16:52.454 15:42:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:52.454 15:42:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70581 00:16:52.454 killing process with pid 70581 00:16:52.454 15:42:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:52.454 15:42:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:52.454 15:42:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70581' 00:16:52.454 15:42:13 -- common/autotest_common.sh@945 -- # kill 70581 00:16:52.454 15:42:13 -- common/autotest_common.sh@950 -- # wait 70581 00:16:53.401 [2024-07-24 15:42:14.789836] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:53.401 [2024-07-24 15:42:14.789925] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:54.775 ************************************ 00:16:54.775 END TEST ublk_recovery 00:16:54.775 
************************************ 00:16:54.775 00:16:54.775 real 1m6.266s 00:16:54.775 user 1m53.185s 00:16:54.775 sys 0m18.705s 00:16:54.775 15:42:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:54.775 15:42:16 -- common/autotest_common.sh@10 -- # set +x 00:16:54.775 15:42:16 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:16:54.775 15:42:16 -- spdk/autotest.sh@268 -- # timing_exit lib 00:16:54.775 15:42:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:54.775 15:42:16 -- common/autotest_common.sh@10 -- # set +x 00:16:54.775 15:42:16 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:16:54.775 15:42:16 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:16:54.775 15:42:16 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:16:54.775 15:42:16 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:54.775 15:42:16 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:54.775 15:42:16 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:16:54.775 15:42:16 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:16:54.775 15:42:16 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:16:54.775 15:42:16 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:16:54.775 15:42:16 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:16:54.775 15:42:16 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:54.775 15:42:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:54.775 15:42:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:54.775 15:42:16 -- common/autotest_common.sh@10 -- # set +x 00:16:54.775 ************************************ 00:16:54.775 START TEST ftl 00:16:54.775 ************************************ 00:16:54.775 15:42:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:54.775 * Looking for test storage... 00:16:54.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:54.776 15:42:16 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:54.776 15:42:16 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:54.776 15:42:16 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:54.776 15:42:16 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:54.776 15:42:16 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:54.776 15:42:16 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:54.776 15:42:16 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:54.776 15:42:16 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:54.776 15:42:16 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:54.776 15:42:16 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.776 15:42:16 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.776 15:42:16 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:54.776 15:42:16 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:54.776 15:42:16 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:54.776 15:42:16 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:54.776 15:42:16 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:54.776 15:42:16 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:54.776 15:42:16 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.776 15:42:16 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.776 15:42:16 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:54.776 15:42:16 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:54.776 15:42:16 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:54.776 15:42:16 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:54.776 15:42:16 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:54.776 15:42:16 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:54.776 15:42:16 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:54.776 15:42:16 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:54.776 15:42:16 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:54.776 15:42:16 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:54.776 15:42:16 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:54.776 15:42:16 -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:54.776 15:42:16 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:54.776 15:42:16 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:54.776 15:42:16 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:54.776 15:42:16 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:55.341 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:55.341 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:55.341 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:55.341 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:55.341 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:55.341 15:42:16 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=71379 00:16:55.341 15:42:16 -- ftl/ftl.sh@38 -- # waitforlisten 71379 00:16:55.341 15:42:16 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:55.341 15:42:16 -- common/autotest_common.sh@819 -- # '[' -z 71379 ']' 00:16:55.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:55.341 15:42:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.341 15:42:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:55.341 15:42:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.341 15:42:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:55.341 15:42:16 -- common/autotest_common.sh@10 -- # set +x 00:16:55.341 [2024-07-24 15:42:16.937345] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:55.341 [2024-07-24 15:42:16.937548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71379 ] 00:16:55.599 [2024-07-24 15:42:17.118990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.858 [2024-07-24 15:42:17.331432] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:55.858 [2024-07-24 15:42:17.331668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.797 15:42:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:56.797 15:42:18 -- common/autotest_common.sh@852 -- # return 0 00:16:56.797 15:42:18 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:56.797 15:42:18 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:58.170 15:42:19 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:58.170 15:42:19 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:58.428 15:42:19 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:58.428 15:42:19 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:58.428 15:42:19 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:58.687 15:42:20 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:16:58.687 15:42:20 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:58.687 15:42:20 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:16:58.687 15:42:20 -- ftl/ftl.sh@50 -- # break 00:16:58.687 15:42:20 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:16:58.687 15:42:20 -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:58.687 15:42:20 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:58.687 15:42:20 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:58.946 15:42:20 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:16:58.946 15:42:20 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:58.946 15:42:20 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:16:58.946 15:42:20 -- ftl/ftl.sh@63 -- # break 00:16:58.946 15:42:20 -- ftl/ftl.sh@66 -- # killprocess 71379 00:16:58.946 15:42:20 -- common/autotest_common.sh@926 -- # '[' -z 71379 ']' 00:16:58.946 15:42:20 -- common/autotest_common.sh@930 -- # kill -0 71379 00:16:58.946 15:42:20 -- common/autotest_common.sh@931 -- # uname 00:16:58.946 15:42:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:58.946 15:42:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71379 00:16:58.946 killing process 
with pid 71379 00:16:58.946 15:42:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:58.946 15:42:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:58.946 15:42:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71379' 00:16:58.946 15:42:20 -- common/autotest_common.sh@945 -- # kill 71379 00:16:58.946 15:42:20 -- common/autotest_common.sh@950 -- # wait 71379 00:17:01.476 15:42:22 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:17:01.476 15:42:22 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:17:01.476 15:42:22 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:17:01.476 15:42:22 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:01.476 15:42:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:01.476 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:17:01.476 ************************************ 00:17:01.476 START TEST ftl_fio_basic 00:17:01.476 ************************************ 00:17:01.476 15:42:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:17:01.476 * Looking for test storage... 00:17:01.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:01.476 15:42:22 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:01.476 15:42:22 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:01.476 15:42:22 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:01.476 15:42:22 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:01.476 15:42:22 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:01.476 15:42:22 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:01.476 15:42:22 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:01.476 15:42:22 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:01.476 15:42:22 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:01.476 15:42:22 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:01.476 15:42:22 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:01.476 15:42:22 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:01.476 15:42:22 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:01.476 15:42:22 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:01.476 15:42:22 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:01.476 15:42:22 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:01.476 15:42:22 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:01.476 15:42:22 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:01.476 15:42:22 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:01.476 15:42:22 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:01.476 15:42:22 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:01.476 15:42:22 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:01.476 15:42:22 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:01.476 15:42:22 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:01.476 15:42:22 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:01.476 15:42:22 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:01.476 15:42:22 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:01.476 15:42:22 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:01.476 15:42:22 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:01.476 15:42:22 -- ftl/fio.sh@11 -- # declare -A suite 00:17:01.476 15:42:22 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:01.476 15:42:22 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:01.476 15:42:22 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:01.476 15:42:22 -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:01.476 15:42:22 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:17:01.476 15:42:22 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:17:01.476 15:42:22 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:01.476 15:42:22 -- ftl/fio.sh@26 -- # uuid= 00:17:01.476 15:42:22 -- ftl/fio.sh@27 -- # timeout=240 00:17:01.476 15:42:22 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:01.476 15:42:22 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:01.476 15:42:22 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:01.476 15:42:22 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:01.476 15:42:22 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:01.476 15:42:22 -- ftl/fio.sh@40 
-- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:01.476 15:42:22 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:01.476 15:42:22 -- ftl/fio.sh@45 -- # svcpid=71513 00:17:01.476 15:42:22 -- ftl/fio.sh@46 -- # waitforlisten 71513 00:17:01.476 15:42:22 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:01.476 15:42:22 -- common/autotest_common.sh@819 -- # '[' -z 71513 ']' 00:17:01.476 15:42:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.476 15:42:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:01.476 15:42:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.476 15:42:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:01.476 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:17:01.476 [2024-07-24 15:42:22.808760] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:01.476 [2024-07-24 15:42:22.809364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71513 ] 00:17:01.476 [2024-07-24 15:42:23.016414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:01.734 [2024-07-24 15:42:23.302759] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:01.734 [2024-07-24 15:42:23.303151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.734 [2024-07-24 15:42:23.303451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.734 [2024-07-24 15:42:23.303456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.104 15:42:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:03.104 15:42:24 -- common/autotest_common.sh@852 -- # return 0 00:17:03.104 15:42:24 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:03.104 15:42:24 -- ftl/common.sh@54 -- # local name=nvme0 00:17:03.104 15:42:24 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:03.104 15:42:24 -- ftl/common.sh@56 -- # local size=103424 00:17:03.104 15:42:24 -- ftl/common.sh@59 -- # local base_bdev 00:17:03.104 15:42:24 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:03.669 15:42:25 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:03.669 15:42:25 -- ftl/common.sh@62 -- # local base_size 00:17:03.669 15:42:25 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:03.669 15:42:25 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:17:03.669 15:42:25 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:03.669 15:42:25 -- common/autotest_common.sh@1359 -- # local bs 00:17:03.669 15:42:25 -- common/autotest_common.sh@1360 -- # local nb 00:17:03.669 15:42:25 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:03.926 15:42:25 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:03.926 { 00:17:03.926 "name": "nvme0n1", 00:17:03.926 "aliases": [ 00:17:03.926 "a419034e-616d-4e4d-91ac-b6e543b5fc6e" 00:17:03.926 ], 00:17:03.926 "product_name": "NVMe disk", 00:17:03.926 
"block_size": 4096, 00:17:03.926 "num_blocks": 1310720, 00:17:03.926 "uuid": "a419034e-616d-4e4d-91ac-b6e543b5fc6e", 00:17:03.926 "assigned_rate_limits": { 00:17:03.926 "rw_ios_per_sec": 0, 00:17:03.926 "rw_mbytes_per_sec": 0, 00:17:03.926 "r_mbytes_per_sec": 0, 00:17:03.926 "w_mbytes_per_sec": 0 00:17:03.926 }, 00:17:03.926 "claimed": false, 00:17:03.926 "zoned": false, 00:17:03.926 "supported_io_types": { 00:17:03.926 "read": true, 00:17:03.926 "write": true, 00:17:03.926 "unmap": true, 00:17:03.926 "write_zeroes": true, 00:17:03.926 "flush": true, 00:17:03.926 "reset": true, 00:17:03.926 "compare": true, 00:17:03.926 "compare_and_write": false, 00:17:03.926 "abort": true, 00:17:03.926 "nvme_admin": true, 00:17:03.926 "nvme_io": true 00:17:03.926 }, 00:17:03.926 "driver_specific": { 00:17:03.926 "nvme": [ 00:17:03.926 { 00:17:03.926 "pci_address": "0000:00:07.0", 00:17:03.926 "trid": { 00:17:03.926 "trtype": "PCIe", 00:17:03.926 "traddr": "0000:00:07.0" 00:17:03.926 }, 00:17:03.926 "ctrlr_data": { 00:17:03.926 "cntlid": 0, 00:17:03.926 "vendor_id": "0x1b36", 00:17:03.926 "model_number": "QEMU NVMe Ctrl", 00:17:03.926 "serial_number": "12341", 00:17:03.926 "firmware_revision": "8.0.0", 00:17:03.926 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:03.926 "oacs": { 00:17:03.926 "security": 0, 00:17:03.926 "format": 1, 00:17:03.926 "firmware": 0, 00:17:03.926 "ns_manage": 1 00:17:03.926 }, 00:17:03.926 "multi_ctrlr": false, 00:17:03.926 "ana_reporting": false 00:17:03.926 }, 00:17:03.926 "vs": { 00:17:03.926 "nvme_version": "1.4" 00:17:03.926 }, 00:17:03.926 "ns_data": { 00:17:03.926 "id": 1, 00:17:03.926 "can_share": false 00:17:03.926 } 00:17:03.926 } 00:17:03.926 ], 00:17:03.927 "mp_policy": "active_passive" 00:17:03.927 } 00:17:03.927 } 00:17:03.927 ]' 00:17:03.927 15:42:25 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:03.927 15:42:25 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:03.927 15:42:25 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:03.927 15:42:25 -- common/autotest_common.sh@1363 -- # nb=1310720 00:17:03.927 15:42:25 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:17:03.927 15:42:25 -- common/autotest_common.sh@1367 -- # echo 5120 00:17:03.927 15:42:25 -- ftl/common.sh@63 -- # base_size=5120 00:17:03.927 15:42:25 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:03.927 15:42:25 -- ftl/common.sh@67 -- # clear_lvols 00:17:03.927 15:42:25 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:03.927 15:42:25 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:04.495 15:42:25 -- ftl/common.sh@28 -- # stores= 00:17:04.495 15:42:25 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:04.753 15:42:26 -- ftl/common.sh@68 -- # lvs=b3f8fcd2-dbf5-4651-9dfa-eb7d100f4993 00:17:04.753 15:42:26 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b3f8fcd2-dbf5-4651-9dfa-eb7d100f4993 00:17:05.011 15:42:26 -- ftl/fio.sh@48 -- # split_bdev=29793405-926a-4311-bf08-64517f27d784 00:17:05.011 15:42:26 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 29793405-926a-4311-bf08-64517f27d784 00:17:05.011 15:42:26 -- ftl/common.sh@35 -- # local name=nvc0 00:17:05.011 15:42:26 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:05.011 15:42:26 -- ftl/common.sh@37 -- # local base_bdev=29793405-926a-4311-bf08-64517f27d784 00:17:05.011 15:42:26 -- ftl/common.sh@38 -- # local 
cache_size= 00:17:05.011 15:42:26 -- ftl/common.sh@41 -- # get_bdev_size 29793405-926a-4311-bf08-64517f27d784 00:17:05.011 15:42:26 -- common/autotest_common.sh@1357 -- # local bdev_name=29793405-926a-4311-bf08-64517f27d784 00:17:05.011 15:42:26 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:05.011 15:42:26 -- common/autotest_common.sh@1359 -- # local bs 00:17:05.011 15:42:26 -- common/autotest_common.sh@1360 -- # local nb 00:17:05.011 15:42:26 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 29793405-926a-4311-bf08-64517f27d784 00:17:05.269 15:42:26 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:05.269 { 00:17:05.269 "name": "29793405-926a-4311-bf08-64517f27d784", 00:17:05.269 "aliases": [ 00:17:05.269 "lvs/nvme0n1p0" 00:17:05.269 ], 00:17:05.269 "product_name": "Logical Volume", 00:17:05.269 "block_size": 4096, 00:17:05.269 "num_blocks": 26476544, 00:17:05.269 "uuid": "29793405-926a-4311-bf08-64517f27d784", 00:17:05.269 "assigned_rate_limits": { 00:17:05.269 "rw_ios_per_sec": 0, 00:17:05.269 "rw_mbytes_per_sec": 0, 00:17:05.269 "r_mbytes_per_sec": 0, 00:17:05.269 "w_mbytes_per_sec": 0 00:17:05.269 }, 00:17:05.269 "claimed": false, 00:17:05.269 "zoned": false, 00:17:05.269 "supported_io_types": { 00:17:05.269 "read": true, 00:17:05.269 "write": true, 00:17:05.269 "unmap": true, 00:17:05.269 "write_zeroes": true, 00:17:05.269 "flush": false, 00:17:05.269 "reset": true, 00:17:05.269 "compare": false, 00:17:05.269 "compare_and_write": false, 00:17:05.269 "abort": false, 00:17:05.269 "nvme_admin": false, 00:17:05.269 "nvme_io": false 00:17:05.269 }, 00:17:05.269 "driver_specific": { 00:17:05.269 "lvol": { 00:17:05.269 "lvol_store_uuid": "b3f8fcd2-dbf5-4651-9dfa-eb7d100f4993", 00:17:05.269 "base_bdev": "nvme0n1", 00:17:05.269 "thin_provision": true, 00:17:05.269 "snapshot": false, 00:17:05.269 "clone": false, 00:17:05.269 "esnap_clone": false 00:17:05.269 } 00:17:05.269 } 00:17:05.269 } 00:17:05.269 ]' 00:17:05.269 15:42:26 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:05.527 15:42:26 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:05.527 15:42:26 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:05.527 15:42:26 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:05.527 15:42:26 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:05.527 15:42:26 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:05.527 15:42:26 -- ftl/common.sh@41 -- # local base_size=5171 00:17:05.527 15:42:26 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:05.527 15:42:26 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:05.784 15:42:27 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:05.784 15:42:27 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:05.784 15:42:27 -- ftl/common.sh@48 -- # get_bdev_size 29793405-926a-4311-bf08-64517f27d784 00:17:05.784 15:42:27 -- common/autotest_common.sh@1357 -- # local bdev_name=29793405-926a-4311-bf08-64517f27d784 00:17:05.784 15:42:27 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:05.784 15:42:27 -- common/autotest_common.sh@1359 -- # local bs 00:17:05.784 15:42:27 -- common/autotest_common.sh@1360 -- # local nb 00:17:05.784 15:42:27 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 29793405-926a-4311-bf08-64517f27d784 00:17:06.350 15:42:27 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:06.350 { 
00:17:06.350 "name": "29793405-926a-4311-bf08-64517f27d784", 00:17:06.350 "aliases": [ 00:17:06.350 "lvs/nvme0n1p0" 00:17:06.350 ], 00:17:06.350 "product_name": "Logical Volume", 00:17:06.350 "block_size": 4096, 00:17:06.350 "num_blocks": 26476544, 00:17:06.350 "uuid": "29793405-926a-4311-bf08-64517f27d784", 00:17:06.350 "assigned_rate_limits": { 00:17:06.350 "rw_ios_per_sec": 0, 00:17:06.350 "rw_mbytes_per_sec": 0, 00:17:06.350 "r_mbytes_per_sec": 0, 00:17:06.350 "w_mbytes_per_sec": 0 00:17:06.350 }, 00:17:06.350 "claimed": false, 00:17:06.350 "zoned": false, 00:17:06.350 "supported_io_types": { 00:17:06.350 "read": true, 00:17:06.350 "write": true, 00:17:06.350 "unmap": true, 00:17:06.350 "write_zeroes": true, 00:17:06.350 "flush": false, 00:17:06.350 "reset": true, 00:17:06.350 "compare": false, 00:17:06.350 "compare_and_write": false, 00:17:06.350 "abort": false, 00:17:06.350 "nvme_admin": false, 00:17:06.350 "nvme_io": false 00:17:06.350 }, 00:17:06.350 "driver_specific": { 00:17:06.350 "lvol": { 00:17:06.350 "lvol_store_uuid": "b3f8fcd2-dbf5-4651-9dfa-eb7d100f4993", 00:17:06.350 "base_bdev": "nvme0n1", 00:17:06.350 "thin_provision": true, 00:17:06.350 "snapshot": false, 00:17:06.350 "clone": false, 00:17:06.350 "esnap_clone": false 00:17:06.350 } 00:17:06.350 } 00:17:06.350 } 00:17:06.350 ]' 00:17:06.350 15:42:27 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:06.350 15:42:27 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:06.350 15:42:27 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:06.350 15:42:27 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:06.350 15:42:27 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:06.350 15:42:27 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:06.350 15:42:27 -- ftl/common.sh@48 -- # cache_size=5171 00:17:06.350 15:42:27 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:06.608 15:42:28 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:06.608 15:42:28 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:06.608 15:42:28 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:06.608 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:06.608 15:42:28 -- ftl/fio.sh@56 -- # get_bdev_size 29793405-926a-4311-bf08-64517f27d784 00:17:06.608 15:42:28 -- common/autotest_common.sh@1357 -- # local bdev_name=29793405-926a-4311-bf08-64517f27d784 00:17:06.608 15:42:28 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:06.608 15:42:28 -- common/autotest_common.sh@1359 -- # local bs 00:17:06.608 15:42:28 -- common/autotest_common.sh@1360 -- # local nb 00:17:06.608 15:42:28 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 29793405-926a-4311-bf08-64517f27d784 00:17:06.866 15:42:28 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:06.866 { 00:17:06.866 "name": "29793405-926a-4311-bf08-64517f27d784", 00:17:06.866 "aliases": [ 00:17:06.867 "lvs/nvme0n1p0" 00:17:06.867 ], 00:17:06.867 "product_name": "Logical Volume", 00:17:06.867 "block_size": 4096, 00:17:06.867 "num_blocks": 26476544, 00:17:06.867 "uuid": "29793405-926a-4311-bf08-64517f27d784", 00:17:06.867 "assigned_rate_limits": { 00:17:06.867 "rw_ios_per_sec": 0, 00:17:06.867 "rw_mbytes_per_sec": 0, 00:17:06.867 "r_mbytes_per_sec": 0, 00:17:06.867 "w_mbytes_per_sec": 0 00:17:06.867 }, 00:17:06.867 "claimed": false, 00:17:06.867 "zoned": false, 00:17:06.867 "supported_io_types": { 00:17:06.867 "read": true, 
00:17:06.867 "write": true, 00:17:06.867 "unmap": true, 00:17:06.867 "write_zeroes": true, 00:17:06.867 "flush": false, 00:17:06.867 "reset": true, 00:17:06.867 "compare": false, 00:17:06.867 "compare_and_write": false, 00:17:06.867 "abort": false, 00:17:06.867 "nvme_admin": false, 00:17:06.867 "nvme_io": false 00:17:06.867 }, 00:17:06.867 "driver_specific": { 00:17:06.867 "lvol": { 00:17:06.867 "lvol_store_uuid": "b3f8fcd2-dbf5-4651-9dfa-eb7d100f4993", 00:17:06.867 "base_bdev": "nvme0n1", 00:17:06.867 "thin_provision": true, 00:17:06.867 "snapshot": false, 00:17:06.867 "clone": false, 00:17:06.867 "esnap_clone": false 00:17:06.867 } 00:17:06.867 } 00:17:06.867 } 00:17:06.867 ]' 00:17:06.867 15:42:28 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:06.867 15:42:28 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:06.867 15:42:28 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:07.125 15:42:28 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:07.125 15:42:28 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:07.125 15:42:28 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:07.125 15:42:28 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:07.125 15:42:28 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:07.125 15:42:28 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 29793405-926a-4311-bf08-64517f27d784 -c nvc0n1p0 --l2p_dram_limit 60 00:17:07.385 [2024-07-24 15:42:28.765231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.385 [2024-07-24 15:42:28.765323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:07.385 [2024-07-24 15:42:28.765357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:07.385 [2024-07-24 15:42:28.765377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.385 [2024-07-24 15:42:28.765546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.385 [2024-07-24 15:42:28.765576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:07.385 [2024-07-24 15:42:28.765599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:07.385 [2024-07-24 15:42:28.765616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.385 [2024-07-24 15:42:28.765676] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:07.385 [2024-07-24 15:42:28.767330] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:07.385 [2024-07-24 15:42:28.767388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.385 [2024-07-24 15:42:28.767408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:07.385 [2024-07-24 15:42:28.767431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.721 ms 00:17:07.385 [2024-07-24 15:42:28.767448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.385 [2024-07-24 15:42:28.767643] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7cf4a05e-a244-44dc-ac5a-65be17b0576f 00:17:07.385 [2024-07-24 15:42:28.769022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.385 [2024-07-24 15:42:28.769080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:07.385 [2024-07-24 15:42:28.769126] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:07.385 [2024-07-24 15:42:28.769148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.385 [2024-07-24 15:42:28.774763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.385 [2024-07-24 15:42:28.774872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:07.385 [2024-07-24 15:42:28.774898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.454 ms 00:17:07.385 [2024-07-24 15:42:28.774925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.385 [2024-07-24 15:42:28.775170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.385 [2024-07-24 15:42:28.775205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:07.385 [2024-07-24 15:42:28.775226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:17:07.385 [2024-07-24 15:42:28.775250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.385 [2024-07-24 15:42:28.775400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.385 [2024-07-24 15:42:28.775428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:07.385 [2024-07-24 15:42:28.775447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:07.385 [2024-07-24 15:42:28.775467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.385 [2024-07-24 15:42:28.775554] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:07.385 [2024-07-24 15:42:28.782442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.385 [2024-07-24 15:42:28.782538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:07.385 [2024-07-24 15:42:28.782571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.900 ms 00:17:07.385 [2024-07-24 15:42:28.782594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.385 [2024-07-24 15:42:28.782700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.385 [2024-07-24 15:42:28.782722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:07.385 [2024-07-24 15:42:28.782745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:07.385 [2024-07-24 15:42:28.782763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.385 [2024-07-24 15:42:28.782862] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:07.385 [2024-07-24 15:42:28.783227] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:07.385 [2024-07-24 15:42:28.783272] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:07.385 [2024-07-24 15:42:28.783299] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:07.385 [2024-07-24 15:42:28.783324] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:07.385 [2024-07-24 15:42:28.783344] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:07.385 [2024-07-24 15:42:28.783365] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
entries: 20971520 00:17:07.385 [2024-07-24 15:42:28.783382] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:07.385 [2024-07-24 15:42:28.783404] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:07.385 [2024-07-24 15:42:28.783420] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:07.385 [2024-07-24 15:42:28.783441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.385 [2024-07-24 15:42:28.783462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:07.385 [2024-07-24 15:42:28.783483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:17:07.385 [2024-07-24 15:42:28.783500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.385 [2024-07-24 15:42:28.783687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.385 [2024-07-24 15:42:28.783709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:07.385 [2024-07-24 15:42:28.783729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:17:07.385 [2024-07-24 15:42:28.783745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.385 [2024-07-24 15:42:28.783957] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:07.385 [2024-07-24 15:42:28.783980] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:07.385 [2024-07-24 15:42:28.784005] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:07.385 [2024-07-24 15:42:28.784023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.385 [2024-07-24 15:42:28.784044] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:07.385 [2024-07-24 15:42:28.784060] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:07.386 [2024-07-24 15:42:28.784080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:07.386 [2024-07-24 15:42:28.784115] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:07.386 [2024-07-24 15:42:28.784136] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:07.386 [2024-07-24 15:42:28.784153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:07.386 [2024-07-24 15:42:28.784173] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:07.386 [2024-07-24 15:42:28.784190] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:07.386 [2024-07-24 15:42:28.784211] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:07.386 [2024-07-24 15:42:28.784228] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:07.386 [2024-07-24 15:42:28.784247] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:07.386 [2024-07-24 15:42:28.784263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.386 [2024-07-24 15:42:28.784285] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:07.386 [2024-07-24 15:42:28.784301] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:07.386 [2024-07-24 15:42:28.784320] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.386 [2024-07-24 15:42:28.784336] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:07.386 [2024-07-24 
15:42:28.784356] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:07.386 [2024-07-24 15:42:28.784371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:07.386 [2024-07-24 15:42:28.784391] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:07.386 [2024-07-24 15:42:28.784407] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:07.386 [2024-07-24 15:42:28.784426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:07.386 [2024-07-24 15:42:28.784442] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:07.386 [2024-07-24 15:42:28.784461] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:07.386 [2024-07-24 15:42:28.784477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:07.386 [2024-07-24 15:42:28.784495] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:07.386 [2024-07-24 15:42:28.784511] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:07.386 [2024-07-24 15:42:28.784530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:07.386 [2024-07-24 15:42:28.784546] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:07.386 [2024-07-24 15:42:28.784573] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:07.386 [2024-07-24 15:42:28.784590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:07.386 [2024-07-24 15:42:28.784609] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:07.386 [2024-07-24 15:42:28.784626] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:07.386 [2024-07-24 15:42:28.784644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:07.386 [2024-07-24 15:42:28.784661] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:07.386 [2024-07-24 15:42:28.784712] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:07.386 [2024-07-24 15:42:28.784729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:07.386 [2024-07-24 15:42:28.784747] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:07.386 [2024-07-24 15:42:28.784765] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:07.386 [2024-07-24 15:42:28.784785] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:07.386 [2024-07-24 15:42:28.784802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.386 [2024-07-24 15:42:28.784823] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:07.386 [2024-07-24 15:42:28.784840] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:07.386 [2024-07-24 15:42:28.784858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:07.386 [2024-07-24 15:42:28.784876] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:07.386 [2024-07-24 15:42:28.784899] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:07.386 [2024-07-24 15:42:28.784915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:07.386 [2024-07-24 15:42:28.784937] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:07.386 [2024-07-24 15:42:28.784958] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:07.386 [2024-07-24 15:42:28.784979] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:07.386 [2024-07-24 15:42:28.784997] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:07.386 [2024-07-24 15:42:28.785017] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:07.386 [2024-07-24 15:42:28.785035] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:07.386 [2024-07-24 15:42:28.785055] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:07.386 [2024-07-24 15:42:28.785072] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:07.386 [2024-07-24 15:42:28.785109] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:07.386 [2024-07-24 15:42:28.785128] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:07.386 [2024-07-24 15:42:28.785148] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:07.386 [2024-07-24 15:42:28.785166] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:07.386 [2024-07-24 15:42:28.785186] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:07.386 [2024-07-24 15:42:28.785204] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:07.386 [2024-07-24 15:42:28.785234] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:07.386 [2024-07-24 15:42:28.785252] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:07.386 [2024-07-24 15:42:28.785274] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:07.386 [2024-07-24 15:42:28.785296] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:07.386 [2024-07-24 15:42:28.785317] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:07.386 [2024-07-24 15:42:28.785335] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:07.386 [2024-07-24 15:42:28.785356] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:07.386 [2024-07-24 15:42:28.785375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.386 [2024-07-24 
15:42:28.785395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:07.386 [2024-07-24 15:42:28.785417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.550 ms 00:17:07.386 [2024-07-24 15:42:28.785437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.386 [2024-07-24 15:42:28.810122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.386 [2024-07-24 15:42:28.810223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:07.386 [2024-07-24 15:42:28.810254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.521 ms 00:17:07.386 [2024-07-24 15:42:28.810275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.386 [2024-07-24 15:42:28.810517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.386 [2024-07-24 15:42:28.810546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:07.386 [2024-07-24 15:42:28.810566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:17:07.386 [2024-07-24 15:42:28.810586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.386 [2024-07-24 15:42:28.867757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.386 [2024-07-24 15:42:28.867842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:07.386 [2024-07-24 15:42:28.867874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.040 ms 00:17:07.386 [2024-07-24 15:42:28.867895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.386 [2024-07-24 15:42:28.867977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.386 [2024-07-24 15:42:28.868002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:07.386 [2024-07-24 15:42:28.868022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:07.386 [2024-07-24 15:42:28.868043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.386 [2024-07-24 15:42:28.868551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.386 [2024-07-24 15:42:28.868592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:07.386 [2024-07-24 15:42:28.868612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:17:07.386 [2024-07-24 15:42:28.868641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.386 [2024-07-24 15:42:28.868917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.386 [2024-07-24 15:42:28.868950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:07.386 [2024-07-24 15:42:28.868968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:17:07.386 [2024-07-24 15:42:28.868988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.386 [2024-07-24 15:42:28.902036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.386 [2024-07-24 15:42:28.902412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:07.386 [2024-07-24 15:42:28.902583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.996 ms 00:17:07.386 [2024-07-24 15:42:28.902660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.386 [2024-07-24 15:42:28.931215] ftl_l2p_cache.c: 
458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:07.386 [2024-07-24 15:42:28.957614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.387 [2024-07-24 15:42:28.957952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:07.387 [2024-07-24 15:42:28.958121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.622 ms 00:17:07.387 [2024-07-24 15:42:28.958198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.644 [2024-07-24 15:42:29.041066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.644 [2024-07-24 15:42:29.041494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:07.644 [2024-07-24 15:42:29.041555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.636 ms 00:17:07.644 [2024-07-24 15:42:29.041587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.644 [2024-07-24 15:42:29.041722] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:17:07.645 [2024-07-24 15:42:29.041760] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:11.856 [2024-07-24 15:42:32.726986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.856 [2024-07-24 15:42:32.727078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:11.856 [2024-07-24 15:42:32.727116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3685.282 ms 00:17:11.856 [2024-07-24 15:42:32.727130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.856 [2024-07-24 15:42:32.727425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.856 [2024-07-24 15:42:32.727454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:11.856 [2024-07-24 15:42:32.727472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:17:11.856 [2024-07-24 15:42:32.727485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.856 [2024-07-24 15:42:32.760230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.856 [2024-07-24 15:42:32.760299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:11.856 [2024-07-24 15:42:32.760324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.628 ms 00:17:11.856 [2024-07-24 15:42:32.760337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.856 [2024-07-24 15:42:32.794013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.856 [2024-07-24 15:42:32.794079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:11.856 [2024-07-24 15:42:32.794137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.584 ms 00:17:11.856 [2024-07-24 15:42:32.794151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.856 [2024-07-24 15:42:32.794587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.856 [2024-07-24 15:42:32.794619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:11.857 [2024-07-24 15:42:32.794637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:17:11.857 [2024-07-24 15:42:32.794654] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:11.857 [2024-07-24 15:42:32.875002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.857 [2024-07-24 15:42:32.875100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:11.857 [2024-07-24 15:42:32.875128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.235 ms 00:17:11.857 [2024-07-24 15:42:32.875142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.857 [2024-07-24 15:42:32.909364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.857 [2024-07-24 15:42:32.909451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:11.857 [2024-07-24 15:42:32.909477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.108 ms 00:17:11.857 [2024-07-24 15:42:32.909489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.857 [2024-07-24 15:42:32.913552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.857 [2024-07-24 15:42:32.913600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:11.857 [2024-07-24 15:42:32.913625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.963 ms 00:17:11.857 [2024-07-24 15:42:32.913637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.857 [2024-07-24 15:42:32.946647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.857 [2024-07-24 15:42:32.946729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:11.857 [2024-07-24 15:42:32.946754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.879 ms 00:17:11.857 [2024-07-24 15:42:32.946766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.857 [2024-07-24 15:42:32.946877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.857 [2024-07-24 15:42:32.946899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:11.857 [2024-07-24 15:42:32.946916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:11.857 [2024-07-24 15:42:32.946928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.857 [2024-07-24 15:42:32.947130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.857 [2024-07-24 15:42:32.947154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:11.857 [2024-07-24 15:42:32.947171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:11.857 [2024-07-24 15:42:32.947183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.857 [2024-07-24 15:42:32.948349] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4182.641 ms, result 0 00:17:11.857 { 00:17:11.857 "name": "ftl0", 00:17:11.857 "uuid": "7cf4a05e-a244-44dc-ac5a-65be17b0576f" 00:17:11.857 } 00:17:11.857 15:42:32 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:11.857 15:42:32 -- common/autotest_common.sh@887 -- # local bdev_name=ftl0 00:17:11.857 15:42:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:11.857 15:42:32 -- common/autotest_common.sh@889 -- # local i 00:17:11.857 15:42:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:11.857 15:42:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:11.857 15:42:32 -- common/autotest_common.sh@892 -- # 
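Each FTL management step above is logged by mngt/ftl_mngt.c as an Action / name / duration / status quadruple, and finish_msg then reports the whole 'FTL startup' process at 4182.641 ms — dominated, as the trace shows, by the 3685.282 ms first-startup NV cache scrub. A minimal sketch for tallying per-step durations out of a saved console log; the filename ftl_startup.log is hypothetical, and the sub() strips the wall-clock suffix the CI harness appends to every line:

    awk '
      /407:trace_step/ { split($0, a, "name: "); name = a[2]; sub(/ [0-9:.]+$/, "", name) }
      /409:trace_step/ { split($0, a, "duration: "); split(a[2], b, " ")
                         printf "%-40s %10.3f ms\n", name, b[1]; sum += b[1] }
      END              { printf "%-40s %10.3f ms\n", "sum of traced steps", sum }
    ' ftl_startup.log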
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:11.857 15:42:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:12.115 [ 00:17:12.115 { 00:17:12.115 "name": "ftl0", 00:17:12.115 "aliases": [ 00:17:12.115 "7cf4a05e-a244-44dc-ac5a-65be17b0576f" 00:17:12.115 ], 00:17:12.115 "product_name": "FTL disk", 00:17:12.115 "block_size": 4096, 00:17:12.115 "num_blocks": 20971520, 00:17:12.115 "uuid": "7cf4a05e-a244-44dc-ac5a-65be17b0576f", 00:17:12.115 "assigned_rate_limits": { 00:17:12.115 "rw_ios_per_sec": 0, 00:17:12.115 "rw_mbytes_per_sec": 0, 00:17:12.115 "r_mbytes_per_sec": 0, 00:17:12.115 "w_mbytes_per_sec": 0 00:17:12.115 }, 00:17:12.115 "claimed": false, 00:17:12.115 "zoned": false, 00:17:12.115 "supported_io_types": { 00:17:12.115 "read": true, 00:17:12.115 "write": true, 00:17:12.115 "unmap": true, 00:17:12.115 "write_zeroes": true, 00:17:12.115 "flush": true, 00:17:12.115 "reset": false, 00:17:12.115 "compare": false, 00:17:12.115 "compare_and_write": false, 00:17:12.115 "abort": false, 00:17:12.115 "nvme_admin": false, 00:17:12.115 "nvme_io": false 00:17:12.115 }, 00:17:12.115 "driver_specific": { 00:17:12.115 "ftl": { 00:17:12.115 "base_bdev": "29793405-926a-4311-bf08-64517f27d784", 00:17:12.115 "cache": "nvc0n1p0" 00:17:12.115 } 00:17:12.115 } 00:17:12.115 } 00:17:12.115 ] 00:17:12.115 15:42:33 -- common/autotest_common.sh@895 -- # return 0 00:17:12.115 15:42:33 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:12.115 15:42:33 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:12.372 15:42:33 -- ftl/fio.sh@70 -- # echo ']}' 00:17:12.372 15:42:33 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:12.372 [2024-07-24 15:42:33.921545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.372 [2024-07-24 15:42:33.922178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:12.373 [2024-07-24 15:42:33.922228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:12.373 [2024-07-24 15:42:33.922247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.373 [2024-07-24 15:42:33.922306] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:12.373 [2024-07-24 15:42:33.925768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.373 [2024-07-24 15:42:33.925805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:12.373 [2024-07-24 15:42:33.925825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.428 ms 00:17:12.373 [2024-07-24 15:42:33.925837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.373 [2024-07-24 15:42:33.926393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.373 [2024-07-24 15:42:33.926426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:12.373 [2024-07-24 15:42:33.926445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:17:12.373 [2024-07-24 15:42:33.926457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.373 [2024-07-24 15:42:33.929956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.373 [2024-07-24 15:42:33.930035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:12.373 
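waitforbdev polls bdev_get_bdevs until the named bdev shows up, and the descriptor it returned above is worth reading: block_size 4096 with num_blocks 20971520 makes ftl0 an 80 GiB volume, and driver_specific.ftl names the base bdev (29793405-926a-4311-bf08-64517f27d784) and the NV cache (nvc0n1p0) backing it. A sketch of pulling those fields out with jq — assuming jq is available; rpc.py talks to the default /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # capacity in GiB = block_size * num_blocks / 2^30
    $rpc bdev_get_bdevs -b ftl0 | jq -r '.[0] | "\(.num_blocks) blocks of \(.block_size) B = \(.block_size * .num_blocks / 1073741824) GiB"'
    # which bdevs back the FTL device
    $rpc bdev_get_bdevs -b ftl0 | jq -r '.[0].driver_specific.ftl | "base=\(.base_bdev) cache=\(.cache)"'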
[2024-07-24 15:42:33.930069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.454 ms 00:17:12.373 [2024-07-24 15:42:33.930110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.373 [2024-07-24 15:42:33.938900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.373 [2024-07-24 15:42:33.938956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:12.373 [2024-07-24 15:42:33.939005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.721 ms 00:17:12.373 [2024-07-24 15:42:33.939028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.631 [2024-07-24 15:42:33.979225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.631 [2024-07-24 15:42:33.979286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:12.631 [2024-07-24 15:42:33.979311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.977 ms 00:17:12.631 [2024-07-24 15:42:33.979324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.631 [2024-07-24 15:42:33.998244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.631 [2024-07-24 15:42:33.998304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:12.631 [2024-07-24 15:42:33.998328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.838 ms 00:17:12.631 [2024-07-24 15:42:33.998341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.631 [2024-07-24 15:42:33.998608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.631 [2024-07-24 15:42:33.998631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:12.631 [2024-07-24 15:42:33.998648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:17:12.631 [2024-07-24 15:42:33.998682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.631 [2024-07-24 15:42:34.030317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.631 [2024-07-24 15:42:34.030375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:12.631 [2024-07-24 15:42:34.030397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.583 ms 00:17:12.631 [2024-07-24 15:42:34.030410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.631 [2024-07-24 15:42:34.061832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.631 [2024-07-24 15:42:34.061889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:12.631 [2024-07-24 15:42:34.061912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.344 ms 00:17:12.631 [2024-07-24 15:42:34.061925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.631 [2024-07-24 15:42:34.093227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.631 [2024-07-24 15:42:34.093292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:12.631 [2024-07-24 15:42:34.093315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.219 ms 00:17:12.631 [2024-07-24 15:42:34.093328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.631 [2024-07-24 15:42:34.124751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.631 [2024-07-24 15:42:34.124810] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:12.631 [2024-07-24 15:42:34.124833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.224 ms 00:17:12.631 [2024-07-24 15:42:34.124846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.631 [2024-07-24 15:42:34.124923] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:12.631 [2024-07-24 15:42:34.124949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:12.631 [2024-07-24 15:42:34.124971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:12.631 [2024-07-24 15:42:34.124984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:12.631 [2024-07-24 15:42:34.124998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:12.631 [2024-07-24 15:42:34.125010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:12.631 [2024-07-24 15:42:34.125024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:12.631 [2024-07-24 15:42:34.125036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:12.632 [2024-07-24 15:42:34.125050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:12.632 [2024-07-24 15:42:34.125062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:12.632 [2024-07-24 15:42:34.125076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:12.632 [2024-07-24 15:42:34.125110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:12.632 [2024-07-24 15:42:34.125127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:12.632 [2024-07-24 15:42:34.125139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:12.632 [2024-07-24 15:42:34.125153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:12.632 [2024-07-24 15:42:34.125165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:12.632 [2024-07-24 15:42:34.125181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:12.632 [2024-07-24 15:42:34.125193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:12.632 [2024-07-24 15:42:34.125210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:12.632 [2024-07-24 15:42:34.125223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:12.632 [2024-07-24 15:42:34.125237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:12.632 [2024-07-24 15:42:34.125249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:12.632 [2024-07-24 15:42:34.125263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:12.632 
[2024-07-24 15:42:34.125275 .. 15:42:34.126297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23 .. Band 96: 0 / 261120 wr_cnt: 0 state: free (74 identical per-band entries, collapsed) 00:17:12.633 [2024-07-24 15:42:34.126310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:12.633 [2024-07-24 15:42:34.126324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:12.633 [2024-07-24 15:42:34.126336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:12.633 [2024-07-24 15:42:34.126350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:12.633 [2024-07-24 15:42:34.126371] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:12.633 [2024-07-24 15:42:34.126385] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7cf4a05e-a244-44dc-ac5a-65be17b0576f 00:17:12.633 [2024-07-24 15:42:34.126397] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:12.633 [2024-07-24 15:42:34.126411] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:12.633 [2024-07-24 15:42:34.126422] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:12.633 [2024-07-24 15:42:34.126435] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:12.633 [2024-07-24 15:42:34.126447] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:12.633 [2024-07-24 15:42:34.126460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:12.633 [2024-07-24 15:42:34.126472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:12.633 [2024-07-24 15:42:34.126485] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:12.633 [2024-07-24 15:42:34.126496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:12.633 [2024-07-24 15:42:34.126511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.633 [2024-07-24 15:42:34.126523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:12.633 [2024-07-24 15:42:34.126538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.593 ms 00:17:12.633 [2024-07-24 15:42:34.126553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.633 [2024-07-24 15:42:34.143678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.633 [2024-07-24 15:42:34.143729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:12.633 [2024-07-24 15:42:34.143751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.029 ms 00:17:12.633 [2024-07-24 15:42:34.143764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.633 [2024-07-24 15:42:34.144030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.633 [2024-07-24 15:42:34.144047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:12.633 [2024-07-24 15:42:34.144068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:17:12.633 [2024-07-24 15:42:34.144080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.633 [2024-07-24 15:42:34.202116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.633 [2024-07-24 15:42:34.202174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:12.633 [2024-07-24 15:42:34.202197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.633 [2024-07-24 15:42:34.202210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
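The ftl_dev_dump_stats block above is the key health summary at shutdown: WAF (write amplification factor) is total media writes divided by user writes. Here the device performed 960 media writes — presumably the metadata persisted during startup and shutdown — against 0 user writes, so WAF is correctly reported as inf; under a real workload it converges to a finite ratio. A sketch of computing it from the two stats lines, with ftl.log as a hypothetical capture of this console output:

    awk '
      /total writes:/ { split($0, a, "total writes: "); total = a[2] + 0 }
      /user writes:/  { split($0, a, "user writes: ");  user  = a[2] + 0 }
      END {
        if (user > 0) printf "WAF = %.2f\n", total / user
        else          print  "WAF = inf (no user writes yet)"
      }
    ' ftl.log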
00:17:12.633 [2024-07-24 15:42:34.202328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.633 [2024-07-24 15:42:34.202345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:12.633 [2024-07-24 15:42:34.202364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.633 [2024-07-24 15:42:34.202375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.633 [2024-07-24 15:42:34.202517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.633 [2024-07-24 15:42:34.202538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:12.633 [2024-07-24 15:42:34.202557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.633 [2024-07-24 15:42:34.202569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.633 [2024-07-24 15:42:34.202607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.633 [2024-07-24 15:42:34.202621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:12.633 [2024-07-24 15:42:34.202635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.633 [2024-07-24 15:42:34.202649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.905 [2024-07-24 15:42:34.317308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.905 [2024-07-24 15:42:34.317370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:12.905 [2024-07-24 15:42:34.317393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.905 [2024-07-24 15:42:34.317405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.905 [2024-07-24 15:42:34.356675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.905 [2024-07-24 15:42:34.356737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:12.905 [2024-07-24 15:42:34.356764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.905 [2024-07-24 15:42:34.356777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.905 [2024-07-24 15:42:34.356896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.905 [2024-07-24 15:42:34.356917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:12.905 [2024-07-24 15:42:34.356933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.905 [2024-07-24 15:42:34.356945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.905 [2024-07-24 15:42:34.357032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.905 [2024-07-24 15:42:34.357050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:12.905 [2024-07-24 15:42:34.357065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.905 [2024-07-24 15:42:34.357077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.905 [2024-07-24 15:42:34.357253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.905 [2024-07-24 15:42:34.357273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:12.905 [2024-07-24 15:42:34.357289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.905 [2024-07-24 
15:42:34.357300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.905 [2024-07-24 15:42:34.357376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.905 [2024-07-24 15:42:34.357395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:12.905 [2024-07-24 15:42:34.357410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.905 [2024-07-24 15:42:34.357421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.905 [2024-07-24 15:42:34.357481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.905 [2024-07-24 15:42:34.357502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:12.905 [2024-07-24 15:42:34.357519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.906 [2024-07-24 15:42:34.357530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.906 [2024-07-24 15:42:34.357599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.906 [2024-07-24 15:42:34.357616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:12.906 [2024-07-24 15:42:34.357631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.906 [2024-07-24 15:42:34.357643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.906 [2024-07-24 15:42:34.357840] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 436.253 ms, result 0 00:17:12.906 true 00:17:12.906 15:42:34 -- ftl/fio.sh@75 -- # killprocess 71513 00:17:12.906 15:42:34 -- common/autotest_common.sh@926 -- # '[' -z 71513 ']' 00:17:12.906 15:42:34 -- common/autotest_common.sh@930 -- # kill -0 71513 00:17:12.906 15:42:34 -- common/autotest_common.sh@931 -- # uname 00:17:12.906 15:42:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:12.906 15:42:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71513 00:17:12.906 killing process with pid 71513 00:17:12.906 15:42:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:12.906 15:42:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:12.906 15:42:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71513' 00:17:12.906 15:42:34 -- common/autotest_common.sh@945 -- # kill 71513 00:17:12.906 15:42:34 -- common/autotest_common.sh@950 -- # wait 71513 00:17:18.180 15:42:38 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:18.180 15:42:38 -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:18.180 15:42:38 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:18.180 15:42:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:18.180 15:42:38 -- common/autotest_common.sh@10 -- # set +x 00:17:18.180 15:42:38 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:18.180 15:42:38 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:18.180 15:42:38 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:17:18.180 15:42:38 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:18.180 15:42:38 -- common/autotest_common.sh@1318 -- # local sanitizers 00:17:18.180 15:42:38 -- common/autotest_common.sh@1319 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:18.180 15:42:38 -- common/autotest_common.sh@1320 -- # shift 00:17:18.180 15:42:38 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:17:18.180 15:42:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:17:18.180 15:42:38 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:18.180 15:42:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:17:18.180 15:42:38 -- common/autotest_common.sh@1324 -- # grep libasan 00:17:18.180 15:42:39 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:18.180 15:42:39 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:18.180 15:42:39 -- common/autotest_common.sh@1326 -- # break 00:17:18.180 15:42:39 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:18.180 15:42:39 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:18.180 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:18.180 fio-3.35 00:17:18.180 Starting 1 thread 00:17:23.446 00:17:23.446 test: (groupid=0, jobs=1): err= 0: pid=71756: Wed Jul 24 15:42:44 2024 00:17:23.446 read: IOPS=1000, BW=66.4MiB/s (69.7MB/s)(255MiB/3832msec) 00:17:23.446 slat (nsec): min=5645, max=30347, avg=7140.15, stdev=2581.05 00:17:23.446 clat (usec): min=309, max=1522, avg=446.09, stdev=55.95 00:17:23.446 lat (usec): min=315, max=1528, avg=453.23, stdev=56.71 00:17:23.446 clat percentiles (usec): 00:17:23.446 | 1.00th=[ 347], 5.00th=[ 371], 10.00th=[ 375], 20.00th=[ 396], 00:17:23.447 | 30.00th=[ 424], 40.00th=[ 437], 50.00th=[ 441], 60.00th=[ 449], 00:17:23.447 | 70.00th=[ 461], 80.00th=[ 486], 90.00th=[ 515], 95.00th=[ 537], 00:17:23.447 | 99.00th=[ 594], 99.50th=[ 611], 99.90th=[ 676], 99.95th=[ 971], 00:17:23.447 | 99.99th=[ 1516] 00:17:23.447 write: IOPS=1007, BW=66.9MiB/s (70.1MB/s)(256MiB/3828msec); 0 zone resets 00:17:23.447 slat (usec): min=20, max=108, avg=23.90, stdev= 4.84 00:17:23.447 clat (usec): min=350, max=2977, avg=505.58, stdev=74.70 00:17:23.447 lat (usec): min=375, max=2999, avg=529.48, stdev=74.95 00:17:23.447 clat percentiles (usec): 00:17:23.447 | 1.00th=[ 396], 5.00th=[ 412], 10.00th=[ 437], 20.00th=[ 465], 00:17:23.447 | 30.00th=[ 469], 40.00th=[ 482], 50.00th=[ 494], 60.00th=[ 515], 00:17:23.447 | 70.00th=[ 537], 80.00th=[ 545], 90.00th=[ 578], 95.00th=[ 603], 00:17:23.447 | 99.00th=[ 734], 99.50th=[ 807], 99.90th=[ 922], 99.95th=[ 1074], 00:17:23.447 | 99.99th=[ 2966] 00:17:23.447 bw ( KiB/s): min=65144, max=71264, per=99.92%, avg=68446.86, stdev=2257.65, samples=7 00:17:23.447 iops : min= 958, max= 1048, avg=1006.57, stdev=33.20, samples=7 00:17:23.447 lat (usec) : 500=68.41%, 750=31.15%, 1000=0.40% 00:17:23.447 lat (msec) : 2=0.03%, 4=0.01% 00:17:23.447 cpu : usr=99.32%, sys=0.10%, ctx=5, majf=0, minf=1318 00:17:23.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:23.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.447 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:23.447 00:17:23.447 Run status group 0 (all jobs): 
00:17:23.447 READ: bw=66.4MiB/s (69.7MB/s), 66.4MiB/s-66.4MiB/s (69.7MB/s-69.7MB/s), io=255MiB (267MB), run=3832-3832msec 00:17:23.447 WRITE: bw=66.9MiB/s (70.1MB/s), 66.9MiB/s-66.9MiB/s (70.1MB/s-70.1MB/s), io=256MiB (269MB), run=3828-3828msec 00:17:24.397 ----------------------------------------------------- 00:17:24.397 Suppressions used: 00:17:24.397 count bytes template 00:17:24.397 1 5 /usr/src/fio/parse.c 00:17:24.397 1 8 libtcmalloc_minimal.so 00:17:24.397 1 904 libcrypto.so 00:17:24.397 ----------------------------------------------------- 00:17:24.397 00:17:24.397 15:42:45 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:24.397 15:42:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:24.397 15:42:45 -- common/autotest_common.sh@10 -- # set +x 00:17:24.397 15:42:45 -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:24.397 15:42:45 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:24.397 15:42:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:24.397 15:42:45 -- common/autotest_common.sh@10 -- # set +x 00:17:24.397 15:42:45 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:24.397 15:42:45 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:24.397 15:42:45 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:17:24.397 15:42:45 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:24.397 15:42:45 -- common/autotest_common.sh@1318 -- # local sanitizers 00:17:24.397 15:42:45 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:24.397 15:42:45 -- common/autotest_common.sh@1320 -- # shift 00:17:24.397 15:42:45 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:17:24.397 15:42:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:17:24.397 15:42:45 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:24.397 15:42:45 -- common/autotest_common.sh@1324 -- # grep libasan 00:17:24.397 15:42:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:17:24.397 15:42:45 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:24.397 15:42:45 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:24.397 15:42:45 -- common/autotest_common.sh@1326 -- # break 00:17:24.397 15:42:45 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:24.397 15:42:45 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:24.654 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:24.654 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:24.654 fio-3.35 00:17:24.654 Starting 2 threads 00:17:56.740 00:17:56.740 first_half: (groupid=0, jobs=1): err= 0: pid=71859: Wed Jul 24 15:43:16 2024 00:17:56.740 read: IOPS=2271, BW=9085KiB/s (9303kB/s)(255MiB/28756msec) 00:17:56.740 slat (nsec): min=4751, max=40186, avg=7497.31, stdev=2011.26 00:17:56.740 clat (usec): min=1123, max=341208, avg=44865.97, stdev=21835.87 00:17:56.740 lat (usec): min=1131, max=341214, avg=44873.47, stdev=21836.07 00:17:56.740 
clat percentiles (msec): 00:17:56.740 | 1.00th=[ 20], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:17:56.740 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 42], 00:17:56.740 | 70.00th=[ 43], 80.00th=[ 46], 90.00th=[ 54], 95.00th=[ 70], 00:17:56.740 | 99.00th=[ 167], 99.50th=[ 192], 99.90th=[ 234], 99.95th=[ 284], 00:17:56.740 | 99.99th=[ 330] 00:17:56.740 write: IOPS=2751, BW=10.7MiB/s (11.3MB/s)(256MiB/23816msec); 0 zone resets 00:17:56.740 slat (usec): min=5, max=292, avg= 9.25, stdev= 4.79 00:17:56.740 clat (usec): min=525, max=113918, avg=11423.91, stdev=19534.18 00:17:56.740 lat (usec): min=532, max=113944, avg=11433.16, stdev=19534.25 00:17:56.740 clat percentiles (usec): 00:17:56.740 | 1.00th=[ 1090], 5.00th=[ 1434], 10.00th=[ 1680], 20.00th=[ 2180], 00:17:56.740 | 30.00th=[ 3523], 40.00th=[ 4817], 50.00th=[ 6128], 60.00th=[ 7111], 00:17:56.740 | 70.00th=[ 8717], 80.00th=[ 12780], 90.00th=[ 15795], 95.00th=[ 58459], 00:17:56.740 | 99.00th=[ 98042], 99.50th=[103285], 99.90th=[109577], 99.95th=[110625], 00:17:56.740 | 99.99th=[112722] 00:17:56.740 bw ( KiB/s): min= 1576, max=42424, per=100.00%, avg=21845.33, stdev=11288.22, samples=24 00:17:56.740 iops : min= 394, max=10606, avg=5461.33, stdev=2822.05, samples=24 00:17:56.740 lat (usec) : 750=0.03%, 1000=0.27% 00:17:56.740 lat (msec) : 2=8.32%, 4=8.48%, 10=20.05%, 20=9.83%, 50=44.16% 00:17:56.740 lat (msec) : 100=6.97%, 250=1.84%, 500=0.04% 00:17:56.740 cpu : usr=99.28%, sys=0.10%, ctx=51, majf=0, minf=5565 00:17:56.740 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:56.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.740 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:56.740 issued rwts: total=65309,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.740 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:56.740 second_half: (groupid=0, jobs=1): err= 0: pid=71860: Wed Jul 24 15:43:16 2024 00:17:56.740 read: IOPS=2251, BW=9007KiB/s (9223kB/s)(255MiB/29008msec) 00:17:56.740 slat (nsec): min=4633, max=50950, avg=7586.62, stdev=1981.75 00:17:56.740 clat (usec): min=1208, max=375839, avg=44535.98, stdev=26989.70 00:17:56.740 lat (usec): min=1216, max=375850, avg=44543.56, stdev=26989.98 00:17:56.740 clat percentiles (msec): 00:17:56.740 | 1.00th=[ 11], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:17:56.740 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 41], 00:17:56.740 | 70.00th=[ 43], 80.00th=[ 45], 90.00th=[ 51], 95.00th=[ 70], 00:17:56.740 | 99.00th=[ 190], 99.50th=[ 218], 99.90th=[ 351], 99.95th=[ 368], 00:17:56.740 | 99.99th=[ 376] 00:17:56.740 write: IOPS=2501, BW=9.77MiB/s (10.2MB/s)(256MiB/26199msec); 0 zone resets 00:17:56.740 slat (usec): min=5, max=200, avg= 9.46, stdev= 4.58 00:17:56.740 clat (usec): min=423, max=114768, avg=12240.95, stdev=20771.30 00:17:56.740 lat (usec): min=455, max=114775, avg=12250.41, stdev=20771.47 00:17:56.740 clat percentiles (usec): 00:17:56.740 | 1.00th=[ 971], 5.00th=[ 1336], 10.00th=[ 1565], 20.00th=[ 1909], 00:17:56.740 | 30.00th=[ 2409], 40.00th=[ 3851], 50.00th=[ 5538], 60.00th=[ 6915], 00:17:56.740 | 70.00th=[ 8455], 80.00th=[ 13173], 90.00th=[ 34866], 95.00th=[ 54789], 00:17:56.740 | 99.00th=[ 99091], 99.50th=[103285], 99.90th=[110625], 99.95th=[112722], 00:17:56.740 | 99.99th=[113771] 00:17:56.740 bw ( KiB/s): min= 40, max=57656, per=100.00%, avg=20164.92, stdev=14600.95, samples=26 00:17:56.740 iops : min= 10, max=14414, avg=5041.23, stdev=3650.24, 
samples=26 00:17:56.740 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.51% 00:17:56.740 lat (msec) : 2=10.82%, 4=9.29%, 10=16.84%, 20=8.63%, 50=45.80% 00:17:56.740 lat (msec) : 100=5.91%, 250=1.99%, 500=0.13% 00:17:56.740 cpu : usr=99.24%, sys=0.13%, ctx=42, majf=0, minf=5546 00:17:56.740 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:56.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.740 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:56.740 issued rwts: total=65317,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.740 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:56.740 00:17:56.740 Run status group 0 (all jobs): 00:17:56.740 READ: bw=17.6MiB/s (18.4MB/s), 9007KiB/s-9085KiB/s (9223kB/s-9303kB/s), io=510MiB (535MB), run=28756-29008msec 00:17:56.740 WRITE: bw=19.5MiB/s (20.5MB/s), 9.77MiB/s-10.7MiB/s (10.2MB/s-11.3MB/s), io=512MiB (537MB), run=23816-26199msec 00:17:57.304 ----------------------------------------------------- 00:17:57.304 Suppressions used: 00:17:57.304 count bytes template 00:17:57.304 2 10 /usr/src/fio/parse.c 00:17:57.304 5 480 /usr/src/fio/iolog.c 00:17:57.304 1 8 libtcmalloc_minimal.so 00:17:57.304 1 904 libcrypto.so 00:17:57.304 ----------------------------------------------------- 00:17:57.304 00:17:57.304 15:43:18 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:17:57.304 15:43:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:57.304 15:43:18 -- common/autotest_common.sh@10 -- # set +x 00:17:57.304 15:43:18 -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:57.304 15:43:18 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:57.304 15:43:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:57.304 15:43:18 -- common/autotest_common.sh@10 -- # set +x 00:17:57.304 15:43:18 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:57.304 15:43:18 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:57.304 15:43:18 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:17:57.304 15:43:18 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:57.304 15:43:18 -- common/autotest_common.sh@1318 -- # local sanitizers 00:17:57.304 15:43:18 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:57.304 15:43:18 -- common/autotest_common.sh@1320 -- # shift 00:17:57.304 15:43:18 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:17:57.304 15:43:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:17:57.304 15:43:18 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:57.304 15:43:18 -- common/autotest_common.sh@1324 -- # grep libasan 00:17:57.304 15:43:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:17:57.304 15:43:18 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:57.304 15:43:18 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:57.304 15:43:18 -- common/autotest_common.sh@1326 -- # break 00:17:57.304 15:43:18 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:57.304 15:43:18 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:57.560 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:57.560 fio-3.35 00:17:57.560 Starting 1 thread 00:18:15.741 00:18:15.741 test: (groupid=0, jobs=1): err= 0: pid=72224: Wed Jul 24 15:43:36 2024 00:18:15.741 read: IOPS=6288, BW=24.6MiB/s (25.8MB/s)(255MiB/10369msec) 00:18:15.741 slat (nsec): min=4709, max=73606, avg=6802.19, stdev=1586.68 00:18:15.741 clat (usec): min=774, max=38371, avg=20344.58, stdev=2146.90 00:18:15.741 lat (usec): min=779, max=38376, avg=20351.38, stdev=2147.01 00:18:15.741 clat percentiles (usec): 00:18:15.741 | 1.00th=[18482], 5.00th=[18744], 10.00th=[18744], 20.00th=[19006], 00:18:15.741 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[19792], 00:18:15.741 | 70.00th=[20841], 80.00th=[21627], 90.00th=[22676], 95.00th=[24511], 00:18:15.741 | 99.00th=[28443], 99.50th=[30016], 99.90th=[31327], 99.95th=[33817], 00:18:15.741 | 99.99th=[37487] 00:18:15.741 write: IOPS=11.2k, BW=43.8MiB/s (46.0MB/s)(256MiB/5841msec); 0 zone resets 00:18:15.741 slat (usec): min=6, max=473, avg= 9.41, stdev= 5.10 00:18:15.741 clat (usec): min=627, max=72211, avg=11348.02, stdev=13962.90 00:18:15.741 lat (usec): min=635, max=72220, avg=11357.42, stdev=13962.95 00:18:15.741 clat percentiles (usec): 00:18:15.741 | 1.00th=[ 971], 5.00th=[ 1172], 10.00th=[ 1303], 20.00th=[ 1516], 00:18:15.741 | 30.00th=[ 1745], 40.00th=[ 2278], 50.00th=[ 7635], 60.00th=[ 8848], 00:18:15.741 | 70.00th=[10159], 80.00th=[12780], 90.00th=[40109], 95.00th=[43254], 00:18:15.741 | 99.00th=[50070], 99.50th=[51643], 99.90th=[54789], 99.95th=[59507], 00:18:15.741 | 99.99th=[67634] 00:18:15.741 bw ( KiB/s): min=23416, max=64080, per=97.35%, avg=43690.67, stdev=9979.05, samples=12 00:18:15.741 iops : min= 5854, max=16020, avg=10922.67, stdev=2494.76, samples=12 00:18:15.741 lat (usec) : 750=0.01%, 1000=0.63% 00:18:15.741 lat (msec) : 2=17.94%, 4=2.36%, 10=13.74%, 20=38.03%, 50=26.76% 00:18:15.741 lat (msec) : 100=0.52% 00:18:15.741 cpu : usr=99.20%, sys=0.20%, ctx=22, majf=0, minf=5567 00:18:15.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:15.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.741 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:15.741 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:15.741 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:15.741 00:18:15.741 Run status group 0 (all jobs): 00:18:15.741 READ: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=255MiB (267MB), run=10369-10369msec 00:18:15.741 WRITE: bw=43.8MiB/s (46.0MB/s), 43.8MiB/s-43.8MiB/s (46.0MB/s-46.0MB/s), io=256MiB (268MB), run=5841-5841msec 00:18:17.115 ----------------------------------------------------- 00:18:17.115 Suppressions used: 00:18:17.115 count bytes template 00:18:17.115 1 5 /usr/src/fio/parse.c 00:18:17.115 2 192 /usr/src/fio/iolog.c 00:18:17.115 1 8 libtcmalloc_minimal.so 00:18:17.115 1 904 libcrypto.so 00:18:17.115 ----------------------------------------------------- 00:18:17.115 00:18:17.115 15:43:38 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:17.115 15:43:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:17.115 15:43:38 -- common/autotest_common.sh@10 -- # set +x 00:18:17.115 15:43:38 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 
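The three passes exercise the same FTL bdev through fio's spdk_bdev ioengine with increasing concurrency: randw-verify writes 68 KiB blocks at iodepth 1, randw-verify-j2 runs two 4 KiB jobs (first_half/second_half) at iodepth 128, and randw-verify-depth128 a single 4 KiB job at iodepth 128. Note that fio reports both binary and decimal rates, e.g. 24.6 MiB/s × 1.048576 ≈ 25.8 MB/s. The job files under test/ftl/config/fio/ are not reproduced in the log; a minimal sketch consistent with the depth128 banner — the verify method and exact option set here are assumptions — might be:

    cat > randw-verify-depth128.fio <<'EOF'
    [global]
    ; SPDK bdev ioengine, loaded via LD_PRELOAD as shown above
    ioengine=spdk_bdev
    ; bdev configuration dumped earlier by save_subsystem_config
    spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    thread=1
    direct=1
    rw=randwrite
    bs=4096
    iodepth=128
    ; re-read and checksum everything that was written
    verify=crc32c
    do_verify=1

    [test]
    ; filename is the bdev name, not a file path
    filename=ftl0
    EOF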
00:18:17.115 15:43:38 -- ftl/fio.sh@85 -- # remove_shm 00:18:17.115 Remove shared memory files 00:18:17.115 15:43:38 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:17.115 15:43:38 -- ftl/common.sh@205 -- # rm -f rm -f 00:18:17.115 15:43:38 -- ftl/common.sh@206 -- # rm -f rm -f 00:18:17.115 15:43:38 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56437 /dev/shm/spdk_tgt_trace.pid70426 00:18:17.115 15:43:38 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:17.115 15:43:38 -- ftl/common.sh@209 -- # rm -f rm -f 00:18:17.115 00:18:17.115 real 1m15.821s 00:18:17.115 user 2m51.176s 00:18:17.115 sys 0m3.944s 00:18:17.115 15:43:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:17.115 15:43:38 -- common/autotest_common.sh@10 -- # set +x 00:18:17.115 ************************************ 00:18:17.115 END TEST ftl_fio_basic 00:18:17.115 ************************************ 00:18:17.115 15:43:38 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:18:17.115 15:43:38 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:17.115 15:43:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:17.115 15:43:38 -- common/autotest_common.sh@10 -- # set +x 00:18:17.115 ************************************ 00:18:17.115 START TEST ftl_bdevperf 00:18:17.115 ************************************ 00:18:17.115 15:43:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:18:17.115 * Looking for test storage... 00:18:17.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:17.115 15:43:38 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:17.115 15:43:38 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:17.115 15:43:38 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:17.115 15:43:38 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:17.115 15:43:38 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
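ftl_bdevperf opens the way every SPDK test does: common.sh locates the test directory and the repository root from the script's own path, so the suite works from any $PWD. The idiom behind the dirname/readlink xtrace above, sketched as it would appear in a script:

    # resolve paths relative to the script itself, not the caller's cwd
    testdir=$(readlink -f "$(dirname "$0")")   # -> .../spdk/test/ftl
    rootdir=$(readlink -f "$testdir/../..")    # -> .../spdk
    rpc_py=$rootdir/scripts/rpc.py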
00:18:17.115 15:43:38 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:17.115 15:43:38 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:17.115 15:43:38 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:17.115 15:43:38 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:17.115 15:43:38 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:17.115 15:43:38 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:17.115 15:43:38 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:17.115 15:43:38 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:17.115 15:43:38 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:17.115 15:43:38 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:17.115 15:43:38 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:17.115 15:43:38 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:17.115 15:43:38 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:17.115 15:43:38 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:17.116 15:43:38 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:17.116 15:43:38 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:17.116 15:43:38 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:17.116 15:43:38 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:17.116 15:43:38 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:17.116 15:43:38 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:17.116 15:43:38 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:17.116 15:43:38 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:17.116 15:43:38 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:17.116 15:43:38 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:17.116 15:43:38 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:18:17.116 15:43:38 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:18:17.116 15:43:38 -- ftl/bdevperf.sh@13 -- # use_append= 00:18:17.116 15:43:38 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:17.116 15:43:38 -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:17.116 15:43:38 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:18:17.116 15:43:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:17.116 15:43:38 -- common/autotest_common.sh@10 -- # set +x 00:18:17.116 15:43:38 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=72478 00:18:17.116 15:43:38 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:17.116 15:43:38 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:17.116 15:43:38 -- ftl/bdevperf.sh@22 -- # waitforlisten 72478 00:18:17.116 15:43:38 -- common/autotest_common.sh@819 -- # '[' -z 72478 ']' 00:18:17.116 15:43:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.116 15:43:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:17.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
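bdevperf is launched with -z (start suspended until an RPC kicks it off) and -T ftl0, and waitforlisten then blocks until pid 72478 exposes its UNIX-domain RPC socket. A simplified sketch of that wait — the real helper in autotest_common.sh also honors max_retries and a configurable RPC address:

    pid=$1
    rpc_addr=${2:-/var/tmp/spdk.sock}
    for ((i = 0; i < 100; i++)); do
        # bail out if the target died before it ever listened
        kill -0 "$pid" 2>/dev/null || { echo "process $pid exited early"; exit 1; }
        # the socket appearing means the app reached its RPC-ready state
        [[ -S $rpc_addr ]] && exit 0
        sleep 0.1
    done
    echo "timed out waiting for $rpc_addr"
    exit 1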
00:18:17.116 15:43:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.116 15:43:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:17.116 15:43:38 -- common/autotest_common.sh@10 -- # set +x 00:18:17.116 [2024-07-24 15:43:38.642011] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:17.116 [2024-07-24 15:43:38.642191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72478 ] 00:18:17.374 [2024-07-24 15:43:38.817767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.633 [2024-07-24 15:43:39.038993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.199 15:43:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:18.199 15:43:39 -- common/autotest_common.sh@852 -- # return 0 00:18:18.199 15:43:39 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:18:18.199 15:43:39 -- ftl/common.sh@54 -- # local name=nvme0 00:18:18.199 15:43:39 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:18:18.199 15:43:39 -- ftl/common.sh@56 -- # local size=103424 00:18:18.199 15:43:39 -- ftl/common.sh@59 -- # local base_bdev 00:18:18.199 15:43:39 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:18:18.456 15:43:40 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:18.456 15:43:40 -- ftl/common.sh@62 -- # local base_size 00:18:18.456 15:43:40 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:18.456 15:43:40 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:18:18.456 15:43:40 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:18.456 15:43:40 -- common/autotest_common.sh@1359 -- # local bs 00:18:18.456 15:43:40 -- common/autotest_common.sh@1360 -- # local nb 00:18:18.457 15:43:40 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:18.715 15:43:40 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:18.715 { 00:18:18.715 "name": "nvme0n1", 00:18:18.715 "aliases": [ 00:18:18.715 "24a68ec2-7679-4723-95fd-810c5f19cb7a" 00:18:18.715 ], 00:18:18.715 "product_name": "NVMe disk", 00:18:18.715 "block_size": 4096, 00:18:18.715 "num_blocks": 1310720, 00:18:18.715 "uuid": "24a68ec2-7679-4723-95fd-810c5f19cb7a", 00:18:18.715 "assigned_rate_limits": { 00:18:18.715 "rw_ios_per_sec": 0, 00:18:18.715 "rw_mbytes_per_sec": 0, 00:18:18.715 "r_mbytes_per_sec": 0, 00:18:18.715 "w_mbytes_per_sec": 0 00:18:18.715 }, 00:18:18.715 "claimed": true, 00:18:18.715 "claim_type": "read_many_write_one", 00:18:18.715 "zoned": false, 00:18:18.715 "supported_io_types": { 00:18:18.715 "read": true, 00:18:18.715 "write": true, 00:18:18.715 "unmap": true, 00:18:18.715 "write_zeroes": true, 00:18:18.715 "flush": true, 00:18:18.715 "reset": true, 00:18:18.715 "compare": true, 00:18:18.715 "compare_and_write": false, 00:18:18.715 "abort": true, 00:18:18.715 "nvme_admin": true, 00:18:18.715 "nvme_io": true 00:18:18.715 }, 00:18:18.715 "driver_specific": { 00:18:18.715 "nvme": [ 00:18:18.715 { 00:18:18.715 "pci_address": "0000:00:07.0", 00:18:18.715 "trid": { 00:18:18.715 "trtype": "PCIe", 00:18:18.715 "traddr": "0000:00:07.0" 00:18:18.715 }, 00:18:18.715 "ctrlr_data": { 00:18:18.715 "cntlid": 
0, 00:18:18.715 "vendor_id": "0x1b36", 00:18:18.715 "model_number": "QEMU NVMe Ctrl", 00:18:18.715 "serial_number": "12341", 00:18:18.715 "firmware_revision": "8.0.0", 00:18:18.715 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:18.715 "oacs": { 00:18:18.715 "security": 0, 00:18:18.715 "format": 1, 00:18:18.715 "firmware": 0, 00:18:18.715 "ns_manage": 1 00:18:18.715 }, 00:18:18.715 "multi_ctrlr": false, 00:18:18.715 "ana_reporting": false 00:18:18.715 }, 00:18:18.715 "vs": { 00:18:18.715 "nvme_version": "1.4" 00:18:18.715 }, 00:18:18.715 "ns_data": { 00:18:18.715 "id": 1, 00:18:18.715 "can_share": false 00:18:18.715 } 00:18:18.715 } 00:18:18.715 ], 00:18:18.715 "mp_policy": "active_passive" 00:18:18.715 } 00:18:18.715 } 00:18:18.715 ]' 00:18:18.715 15:43:40 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:18.973 15:43:40 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:18.973 15:43:40 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:18.973 15:43:40 -- common/autotest_common.sh@1363 -- # nb=1310720 00:18:18.973 15:43:40 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:18:18.973 15:43:40 -- common/autotest_common.sh@1367 -- # echo 5120 00:18:18.973 15:43:40 -- ftl/common.sh@63 -- # base_size=5120 00:18:18.973 15:43:40 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:18.973 15:43:40 -- ftl/common.sh@67 -- # clear_lvols 00:18:18.973 15:43:40 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:18.973 15:43:40 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:19.232 15:43:40 -- ftl/common.sh@28 -- # stores=b3f8fcd2-dbf5-4651-9dfa-eb7d100f4993 00:18:19.232 15:43:40 -- ftl/common.sh@29 -- # for lvs in $stores 00:18:19.232 15:43:40 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b3f8fcd2-dbf5-4651-9dfa-eb7d100f4993 00:18:19.490 15:43:40 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:19.748 15:43:41 -- ftl/common.sh@68 -- # lvs=68681655-294f-4645-ac8d-ef325efab724 00:18:19.748 15:43:41 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 68681655-294f-4645-ac8d-ef325efab724 00:18:20.367 15:43:41 -- ftl/bdevperf.sh@23 -- # split_bdev=326f66c2-31ee-45f4-839e-a1f2c9adf458 00:18:20.367 15:43:41 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 326f66c2-31ee-45f4-839e-a1f2c9adf458 00:18:20.367 15:43:41 -- ftl/common.sh@35 -- # local name=nvc0 00:18:20.367 15:43:41 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:18:20.367 15:43:41 -- ftl/common.sh@37 -- # local base_bdev=326f66c2-31ee-45f4-839e-a1f2c9adf458 00:18:20.367 15:43:41 -- ftl/common.sh@38 -- # local cache_size= 00:18:20.367 15:43:41 -- ftl/common.sh@41 -- # get_bdev_size 326f66c2-31ee-45f4-839e-a1f2c9adf458 00:18:20.367 15:43:41 -- common/autotest_common.sh@1357 -- # local bdev_name=326f66c2-31ee-45f4-839e-a1f2c9adf458 00:18:20.367 15:43:41 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:20.367 15:43:41 -- common/autotest_common.sh@1359 -- # local bs 00:18:20.367 15:43:41 -- common/autotest_common.sh@1360 -- # local nb 00:18:20.367 15:43:41 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 326f66c2-31ee-45f4-839e-a1f2c9adf458 00:18:20.367 15:43:41 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:20.367 { 00:18:20.367 "name": "326f66c2-31ee-45f4-839e-a1f2c9adf458", 00:18:20.367 "aliases": [ 
00:18:20.367 "lvs/nvme0n1p0" 00:18:20.367 ], 00:18:20.367 "product_name": "Logical Volume", 00:18:20.367 "block_size": 4096, 00:18:20.367 "num_blocks": 26476544, 00:18:20.367 "uuid": "326f66c2-31ee-45f4-839e-a1f2c9adf458", 00:18:20.367 "assigned_rate_limits": { 00:18:20.367 "rw_ios_per_sec": 0, 00:18:20.368 "rw_mbytes_per_sec": 0, 00:18:20.368 "r_mbytes_per_sec": 0, 00:18:20.368 "w_mbytes_per_sec": 0 00:18:20.368 }, 00:18:20.368 "claimed": false, 00:18:20.368 "zoned": false, 00:18:20.368 "supported_io_types": { 00:18:20.368 "read": true, 00:18:20.368 "write": true, 00:18:20.368 "unmap": true, 00:18:20.368 "write_zeroes": true, 00:18:20.368 "flush": false, 00:18:20.368 "reset": true, 00:18:20.368 "compare": false, 00:18:20.368 "compare_and_write": false, 00:18:20.368 "abort": false, 00:18:20.368 "nvme_admin": false, 00:18:20.368 "nvme_io": false 00:18:20.368 }, 00:18:20.368 "driver_specific": { 00:18:20.368 "lvol": { 00:18:20.368 "lvol_store_uuid": "68681655-294f-4645-ac8d-ef325efab724", 00:18:20.368 "base_bdev": "nvme0n1", 00:18:20.368 "thin_provision": true, 00:18:20.368 "snapshot": false, 00:18:20.368 "clone": false, 00:18:20.368 "esnap_clone": false 00:18:20.368 } 00:18:20.368 } 00:18:20.368 } 00:18:20.368 ]' 00:18:20.368 15:43:41 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:20.368 15:43:41 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:20.368 15:43:41 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:20.625 15:43:41 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:20.625 15:43:41 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:20.625 15:43:41 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:20.625 15:43:41 -- ftl/common.sh@41 -- # local base_size=5171 00:18:20.625 15:43:41 -- ftl/common.sh@44 -- # local nvc_bdev 00:18:20.625 15:43:41 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:18:20.882 15:43:42 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:20.882 15:43:42 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:20.882 15:43:42 -- ftl/common.sh@48 -- # get_bdev_size 326f66c2-31ee-45f4-839e-a1f2c9adf458 00:18:20.882 15:43:42 -- common/autotest_common.sh@1357 -- # local bdev_name=326f66c2-31ee-45f4-839e-a1f2c9adf458 00:18:20.882 15:43:42 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:20.882 15:43:42 -- common/autotest_common.sh@1359 -- # local bs 00:18:20.882 15:43:42 -- common/autotest_common.sh@1360 -- # local nb 00:18:20.882 15:43:42 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 326f66c2-31ee-45f4-839e-a1f2c9adf458 00:18:21.140 15:43:42 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:21.140 { 00:18:21.140 "name": "326f66c2-31ee-45f4-839e-a1f2c9adf458", 00:18:21.140 "aliases": [ 00:18:21.140 "lvs/nvme0n1p0" 00:18:21.140 ], 00:18:21.140 "product_name": "Logical Volume", 00:18:21.140 "block_size": 4096, 00:18:21.140 "num_blocks": 26476544, 00:18:21.140 "uuid": "326f66c2-31ee-45f4-839e-a1f2c9adf458", 00:18:21.140 "assigned_rate_limits": { 00:18:21.140 "rw_ios_per_sec": 0, 00:18:21.140 "rw_mbytes_per_sec": 0, 00:18:21.140 "r_mbytes_per_sec": 0, 00:18:21.140 "w_mbytes_per_sec": 0 00:18:21.140 }, 00:18:21.140 "claimed": false, 00:18:21.140 "zoned": false, 00:18:21.140 "supported_io_types": { 00:18:21.140 "read": true, 00:18:21.140 "write": true, 00:18:21.140 "unmap": true, 00:18:21.140 "write_zeroes": true, 00:18:21.140 "flush": false, 00:18:21.140 "reset": true, 
00:18:21.140 "compare": false, 00:18:21.140 "compare_and_write": false, 00:18:21.140 "abort": false, 00:18:21.140 "nvme_admin": false, 00:18:21.140 "nvme_io": false 00:18:21.140 }, 00:18:21.140 "driver_specific": { 00:18:21.140 "lvol": { 00:18:21.140 "lvol_store_uuid": "68681655-294f-4645-ac8d-ef325efab724", 00:18:21.140 "base_bdev": "nvme0n1", 00:18:21.140 "thin_provision": true, 00:18:21.140 "snapshot": false, 00:18:21.140 "clone": false, 00:18:21.140 "esnap_clone": false 00:18:21.140 } 00:18:21.140 } 00:18:21.140 } 00:18:21.140 ]' 00:18:21.140 15:43:42 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:21.140 15:43:42 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:21.140 15:43:42 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:21.140 15:43:42 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:21.140 15:43:42 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:21.140 15:43:42 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:21.140 15:43:42 -- ftl/common.sh@48 -- # cache_size=5171 00:18:21.140 15:43:42 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:21.397 15:43:42 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:18:21.397 15:43:42 -- ftl/bdevperf.sh@26 -- # get_bdev_size 326f66c2-31ee-45f4-839e-a1f2c9adf458 00:18:21.397 15:43:42 -- common/autotest_common.sh@1357 -- # local bdev_name=326f66c2-31ee-45f4-839e-a1f2c9adf458 00:18:21.397 15:43:42 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:21.397 15:43:42 -- common/autotest_common.sh@1359 -- # local bs 00:18:21.397 15:43:42 -- common/autotest_common.sh@1360 -- # local nb 00:18:21.397 15:43:42 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 326f66c2-31ee-45f4-839e-a1f2c9adf458 00:18:21.654 15:43:43 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:21.654 { 00:18:21.654 "name": "326f66c2-31ee-45f4-839e-a1f2c9adf458", 00:18:21.654 "aliases": [ 00:18:21.654 "lvs/nvme0n1p0" 00:18:21.654 ], 00:18:21.654 "product_name": "Logical Volume", 00:18:21.654 "block_size": 4096, 00:18:21.654 "num_blocks": 26476544, 00:18:21.654 "uuid": "326f66c2-31ee-45f4-839e-a1f2c9adf458", 00:18:21.654 "assigned_rate_limits": { 00:18:21.654 "rw_ios_per_sec": 0, 00:18:21.654 "rw_mbytes_per_sec": 0, 00:18:21.654 "r_mbytes_per_sec": 0, 00:18:21.654 "w_mbytes_per_sec": 0 00:18:21.654 }, 00:18:21.654 "claimed": false, 00:18:21.654 "zoned": false, 00:18:21.654 "supported_io_types": { 00:18:21.654 "read": true, 00:18:21.654 "write": true, 00:18:21.654 "unmap": true, 00:18:21.654 "write_zeroes": true, 00:18:21.654 "flush": false, 00:18:21.654 "reset": true, 00:18:21.654 "compare": false, 00:18:21.654 "compare_and_write": false, 00:18:21.654 "abort": false, 00:18:21.654 "nvme_admin": false, 00:18:21.654 "nvme_io": false 00:18:21.654 }, 00:18:21.654 "driver_specific": { 00:18:21.654 "lvol": { 00:18:21.654 "lvol_store_uuid": "68681655-294f-4645-ac8d-ef325efab724", 00:18:21.654 "base_bdev": "nvme0n1", 00:18:21.654 "thin_provision": true, 00:18:21.654 "snapshot": false, 00:18:21.654 "clone": false, 00:18:21.654 "esnap_clone": false 00:18:21.654 } 00:18:21.654 } 00:18:21.654 } 00:18:21.654 ]' 00:18:21.654 15:43:43 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:21.654 15:43:43 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:21.654 15:43:43 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:21.912 15:43:43 -- common/autotest_common.sh@1363 -- # 
nb=26476544 00:18:21.912 15:43:43 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:21.912 15:43:43 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:21.912 15:43:43 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:18:21.912 15:43:43 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 326f66c2-31ee-45f4-839e-a1f2c9adf458 -c nvc0n1p0 --l2p_dram_limit 20 00:18:22.171 [2024-07-24 15:43:43.536535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.171 [2024-07-24 15:43:43.536598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:22.171 [2024-07-24 15:43:43.536623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:22.171 [2024-07-24 15:43:43.536635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.171 [2024-07-24 15:43:43.536711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.171 [2024-07-24 15:43:43.536729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:22.171 [2024-07-24 15:43:43.536745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:22.171 [2024-07-24 15:43:43.536757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.171 [2024-07-24 15:43:43.536785] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:22.171 [2024-07-24 15:43:43.537766] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:22.171 [2024-07-24 15:43:43.537811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.171 [2024-07-24 15:43:43.537826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.171 [2024-07-24 15:43:43.537841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.029 ms 00:18:22.171 [2024-07-24 15:43:43.537852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.171 [2024-07-24 15:43:43.537983] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7e066178-d351-4fe5-a27d-3c399bea1eee 00:18:22.171 [2024-07-24 15:43:43.539016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.171 [2024-07-24 15:43:43.539061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:22.171 [2024-07-24 15:43:43.539077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:22.171 [2024-07-24 15:43:43.539107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.171 [2024-07-24 15:43:43.543558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.171 [2024-07-24 15:43:43.543605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.171 [2024-07-24 15:43:43.543621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.400 ms 00:18:22.171 [2024-07-24 15:43:43.543638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.171 [2024-07-24 15:43:43.543753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.171 [2024-07-24 15:43:43.543775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.171 [2024-07-24 15:43:43.543788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:22.171 [2024-07-24 15:43:43.543806] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.171 [2024-07-24 15:43:43.543868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.171 [2024-07-24 15:43:43.543889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:22.171 [2024-07-24 15:43:43.543902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:22.171 [2024-07-24 15:43:43.543916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.171 [2024-07-24 15:43:43.543950] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:22.171 [2024-07-24 15:43:43.548471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.171 [2024-07-24 15:43:43.548507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.171 [2024-07-24 15:43:43.548530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.529 ms 00:18:22.171 [2024-07-24 15:43:43.548542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.171 [2024-07-24 15:43:43.548587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.171 [2024-07-24 15:43:43.548603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:22.171 [2024-07-24 15:43:43.548617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:22.171 [2024-07-24 15:43:43.548629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.171 [2024-07-24 15:43:43.548695] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:22.171 [2024-07-24 15:43:43.548839] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:22.171 [2024-07-24 15:43:43.548865] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:22.171 [2024-07-24 15:43:43.548880] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:22.171 [2024-07-24 15:43:43.548897] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:22.171 [2024-07-24 15:43:43.548920] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:22.171 [2024-07-24 15:43:43.548934] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:22.171 [2024-07-24 15:43:43.548945] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:22.171 [2024-07-24 15:43:43.548960] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:22.171 [2024-07-24 15:43:43.548971] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:22.171 [2024-07-24 15:43:43.548988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.171 [2024-07-24 15:43:43.548999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:22.171 [2024-07-24 15:43:43.549014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:18:22.171 [2024-07-24 15:43:43.549025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.171 [2024-07-24 15:43:43.549117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.171 [2024-07-24 15:43:43.549134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Verify layout 00:18:22.171 [2024-07-24 15:43:43.549148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:22.171 [2024-07-24 15:43:43.549159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.171 [2024-07-24 15:43:43.549242] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:22.171 [2024-07-24 15:43:43.549259] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:22.171 [2024-07-24 15:43:43.549274] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.171 [2024-07-24 15:43:43.549285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.171 [2024-07-24 15:43:43.549299] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:22.171 [2024-07-24 15:43:43.549309] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:22.172 [2024-07-24 15:43:43.549322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:22.172 [2024-07-24 15:43:43.549333] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:22.172 [2024-07-24 15:43:43.549357] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:22.172 [2024-07-24 15:43:43.549368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.172 [2024-07-24 15:43:43.549383] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:22.172 [2024-07-24 15:43:43.549394] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:22.172 [2024-07-24 15:43:43.549406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.172 [2024-07-24 15:43:43.549420] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:22.172 [2024-07-24 15:43:43.549433] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:22.172 [2024-07-24 15:43:43.549450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.172 [2024-07-24 15:43:43.549465] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:22.172 [2024-07-24 15:43:43.549476] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:22.172 [2024-07-24 15:43:43.549488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.172 [2024-07-24 15:43:43.549498] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:22.172 [2024-07-24 15:43:43.549510] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:22.172 [2024-07-24 15:43:43.549521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:22.172 [2024-07-24 15:43:43.549533] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:22.172 [2024-07-24 15:43:43.549544] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:22.172 [2024-07-24 15:43:43.549556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:22.172 [2024-07-24 15:43:43.549566] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:22.172 [2024-07-24 15:43:43.549578] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:22.172 [2024-07-24 15:43:43.549589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:22.172 [2024-07-24 15:43:43.549600] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:22.172 [2024-07-24 15:43:43.549611] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:22.172 [2024-07-24 15:43:43.549623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:22.172 [2024-07-24 15:43:43.549634] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:22.172 [2024-07-24 15:43:43.549648] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:22.172 [2024-07-24 15:43:43.549658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:22.172 [2024-07-24 15:43:43.549670] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:22.172 [2024-07-24 15:43:43.549681] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:22.172 [2024-07-24 15:43:43.549695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.172 [2024-07-24 15:43:43.549705] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:22.172 [2024-07-24 15:43:43.549717] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:22.172 [2024-07-24 15:43:43.549727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.172 [2024-07-24 15:43:43.549739] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:22.172 [2024-07-24 15:43:43.549751] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:22.172 [2024-07-24 15:43:43.549764] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.172 [2024-07-24 15:43:43.549775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.172 [2024-07-24 15:43:43.549788] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:22.172 [2024-07-24 15:43:43.549802] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:22.172 [2024-07-24 15:43:43.549815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:22.172 [2024-07-24 15:43:43.549825] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:22.172 [2024-07-24 15:43:43.549839] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:22.172 [2024-07-24 15:43:43.549850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:22.172 [2024-07-24 15:43:43.549863] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:22.172 [2024-07-24 15:43:43.549878] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.172 [2024-07-24 15:43:43.549893] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:22.172 [2024-07-24 15:43:43.549904] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:22.172 [2024-07-24 15:43:43.549918] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:22.172 [2024-07-24 15:43:43.549929] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:22.172 [2024-07-24 15:43:43.549942] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:22.172 [2024-07-24 15:43:43.549955] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:22.172 [2024-07-24 15:43:43.549968] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:22.172 [2024-07-24 15:43:43.549980] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:22.172 [2024-07-24 15:43:43.549993] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:22.172 [2024-07-24 15:43:43.550005] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:22.172 [2024-07-24 15:43:43.550018] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:22.172 [2024-07-24 15:43:43.550029] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:22.172 [2024-07-24 15:43:43.550047] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:22.172 [2024-07-24 15:43:43.550058] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:22.172 [2024-07-24 15:43:43.550073] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.172 [2024-07-24 15:43:43.550102] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:22.172 [2024-07-24 15:43:43.550119] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:22.172 [2024-07-24 15:43:43.550131] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:22.172 [2024-07-24 15:43:43.550144] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:22.172 [2024-07-24 15:43:43.550157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.172 [2024-07-24 15:43:43.550171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:22.172 [2024-07-24 15:43:43.550183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:18:22.172 [2024-07-24 15:43:43.550196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.172 [2024-07-24 15:43:43.568192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.172 [2024-07-24 15:43:43.568242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:22.172 [2024-07-24 15:43:43.568261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.948 ms 00:18:22.172 [2024-07-24 15:43:43.568274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.172 [2024-07-24 15:43:43.568377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.172 [2024-07-24 15:43:43.568397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:22.172 [2024-07-24 15:43:43.568410] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:22.172 [2024-07-24 15:43:43.568423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.172 [2024-07-24 15:43:43.619325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.172 [2024-07-24 15:43:43.619383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:22.172 [2024-07-24 15:43:43.619402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.833 ms 00:18:22.172 [2024-07-24 15:43:43.619417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.172 [2024-07-24 15:43:43.619467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.172 [2024-07-24 15:43:43.619486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:22.172 [2024-07-24 15:43:43.619499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:22.172 [2024-07-24 15:43:43.619515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.172 [2024-07-24 15:43:43.619892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.172 [2024-07-24 15:43:43.619928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:22.172 [2024-07-24 15:43:43.619943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:18:22.172 [2024-07-24 15:43:43.619956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.172 [2024-07-24 15:43:43.620109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.172 [2024-07-24 15:43:43.620141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:22.172 [2024-07-24 15:43:43.620156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:18:22.172 [2024-07-24 15:43:43.620169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.172 [2024-07-24 15:43:43.637387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.172 [2024-07-24 15:43:43.637436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:22.172 [2024-07-24 15:43:43.637455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.193 ms 00:18:22.172 [2024-07-24 15:43:43.637468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.173 [2024-07-24 15:43:43.650835] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:22.173 [2024-07-24 15:43:43.655750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.173 [2024-07-24 15:43:43.655788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:22.173 [2024-07-24 15:43:43.655808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.158 ms 00:18:22.173 [2024-07-24 15:43:43.655821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.173 [2024-07-24 15:43:43.716600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.173 [2024-07-24 15:43:43.716661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:22.173 [2024-07-24 15:43:43.716685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.735 ms 00:18:22.173 [2024-07-24 15:43:43.716698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.173 [2024-07-24 15:43:43.716757] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: 
*NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:18:22.173 [2024-07-24 15:43:43.716778] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:18:24.701 [2024-07-24 15:43:45.780760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.701 [2024-07-24 15:43:45.780834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:24.701 [2024-07-24 15:43:45.780859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2064.010 ms 00:18:24.701 [2024-07-24 15:43:45.780872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.701 [2024-07-24 15:43:45.781127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.701 [2024-07-24 15:43:45.781155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:24.701 [2024-07-24 15:43:45.781173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:18:24.701 [2024-07-24 15:43:45.781185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.701 [2024-07-24 15:43:45.812237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.701 [2024-07-24 15:43:45.812282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:24.701 [2024-07-24 15:43:45.812302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.981 ms 00:18:24.701 [2024-07-24 15:43:45.812315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.701 [2024-07-24 15:43:45.842785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.701 [2024-07-24 15:43:45.842842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:24.701 [2024-07-24 15:43:45.842867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.416 ms 00:18:24.701 [2024-07-24 15:43:45.842878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.701 [2024-07-24 15:43:45.843299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.701 [2024-07-24 15:43:45.843345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:24.701 [2024-07-24 15:43:45.843363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:18:24.701 [2024-07-24 15:43:45.843377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.701 [2024-07-24 15:43:45.920688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.701 [2024-07-24 15:43:45.920746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:24.701 [2024-07-24 15:43:45.920768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.243 ms 00:18:24.701 [2024-07-24 15:43:45.920781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.701 [2024-07-24 15:43:45.952601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.701 [2024-07-24 15:43:45.952646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:24.701 [2024-07-24 15:43:45.952666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.766 ms 00:18:24.701 [2024-07-24 15:43:45.952679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.701 [2024-07-24 15:43:45.954603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.701 [2024-07-24 
15:43:45.954643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:24.701 [2024-07-24 15:43:45.954663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.877 ms 00:18:24.701 [2024-07-24 15:43:45.954675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.701 [2024-07-24 15:43:45.985835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.701 [2024-07-24 15:43:45.985887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:24.701 [2024-07-24 15:43:45.985909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.092 ms 00:18:24.701 [2024-07-24 15:43:45.985920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.701 [2024-07-24 15:43:45.985978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.701 [2024-07-24 15:43:45.985997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:24.701 [2024-07-24 15:43:45.986013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:24.701 [2024-07-24 15:43:45.986025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.701 [2024-07-24 15:43:45.986178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.701 [2024-07-24 15:43:45.986198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:24.701 [2024-07-24 15:43:45.986224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:24.701 [2024-07-24 15:43:45.986235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.701 [2024-07-24 15:43:45.987253] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2450.199 ms, result 0 00:18:24.701 { 00:18:24.701 "name": "ftl0", 00:18:24.701 "uuid": "7e066178-d351-4fe5-a27d-3c399bea1eee" 00:18:24.701 } 00:18:24.701 15:43:46 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:18:24.701 15:43:46 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:24.701 15:43:46 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:18:24.701 15:43:46 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:24.959 [2024-07-24 15:43:46.403725] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:24.959 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:24.959 Zero copy mechanism will not be used. 00:18:24.959 Running I/O for 4 seconds... 
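[annotation, not captured output] FTL startup has just completed (duration ≈ 2450 ms, result 0) and the first workload is running. The whole stack under test was assembled over JSON-RPC; condensed from the trace above, with the PCI addresses, sizes, and UUIDs taken from this run:

  # sketch: the RPC sequence that built ftl0, condensed from the trace above
  $rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0    # base NVMe (nvme0n1)
  $rpc_py bdev_lvol_create_lvstore nvme0n1 lvs                            # lvstore on the base bdev
  $rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u 68681655-294f-4645-ac8d-ef325efab724    # 103424 MiB thin lvol
  $rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0     # cache NVMe
  $rpc_py bdev_split_create nvc0n1 -s 5171 1                              # one 5171 MiB NV-cache slice (nvc0n1p0)
  $rpc_py -t 240 bdev_ftl_create -b ftl0 -d 326f66c2-31ee-45f4-839e-a1f2c9adf458 -c nvc0n1p0 --l2p_dram_limit 20

Note the first pass uses -o 69632 (68 KiB): as the notice above says, that exceeds bdevperf's 65536-byte zero-copy threshold, so this run deliberately exercises the non-zero-copy path.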
00:18:29.142 00:18:29.142 Latency(us) 00:18:29.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.142 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:29.142 ftl0 : 4.00 2033.71 135.05 0.00 0.00 514.90 238.31 1280.93 00:18:29.142 =================================================================================================================== 00:18:29.142 Total : 2033.71 135.05 0.00 0.00 514.90 238.31 1280.93 00:18:29.142 0 00:18:29.142 [2024-07-24 15:43:50.413870] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:29.142 15:43:50 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:29.142 [2024-07-24 15:43:50.524716] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:29.142 Running I/O for 4 seconds... 00:18:33.322 00:18:33.322 Latency(us) 00:18:33.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.322 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:33.322 ftl0 : 4.02 7048.67 27.53 0.00 0.00 18113.16 363.05 56480.12 00:18:33.322 =================================================================================================================== 00:18:33.322 Total : 7048.67 27.53 0.00 0.00 18113.16 0.00 56480.12 00:18:33.322 [2024-07-24 15:43:54.552192] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:33.322 0 00:18:33.322 15:43:54 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:33.322 [2024-07-24 15:43:54.674204] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:33.322 Running I/O for 4 seconds... 
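[annotation, not captured output] A quick sanity check of the two completed runs: bdevperf's MiB/s column is just IOPS × IO size, and at qd=128 the average latency tracks Little's law (queue depth / IOPS). The values below are taken from the tables above:

  # sketch: recompute the reported throughput and latency from the tables above
  awk 'BEGIN { printf "qd=1,  68 KiB: %.2f MiB/s\n", 2033.71 * 69632 / (1024*1024) }'   # ~135.05
  awk 'BEGIN { printf "qd=128, 4 KiB: %.2f MiB/s\n", 7048.67 * 4096  / (1024*1024) }'   # ~27.53
  awk 'BEGIN { printf "qd=128 avg lat ~ 128/IOPS = %.0f us\n", 128 / 7048.67 * 1e6 }'   # ~18159, near the reported 18113.16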
00:18:37.521 00:18:37.521 Latency(us) 00:18:37.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.521 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:37.521 Verification LBA range: start 0x0 length 0x1400000 00:18:37.521 ftl0 : 4.01 8304.53 32.44 0.00 0.00 15372.00 222.49 41466.41 00:18:37.521 =================================================================================================================== 00:18:37.521 Total : 8304.53 32.44 0.00 0.00 15372.00 0.00 41466.41 00:18:37.521 0 00:18:37.521 [2024-07-24 15:43:58.699475] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:37.521 15:43:58 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:18:37.521 [2024-07-24 15:43:58.976518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.521 [2024-07-24 15:43:58.976581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:37.521 [2024-07-24 15:43:58.976607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:37.521 [2024-07-24 15:43:58.976620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.521 [2024-07-24 15:43:58.976656] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:37.521 [2024-07-24 15:43:58.979981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.521 [2024-07-24 15:43:58.980015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:37.521 [2024-07-24 15:43:58.980031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.301 ms 00:18:37.521 [2024-07-24 15:43:58.980051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.521 [2024-07-24 15:43:58.981413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.521 [2024-07-24 15:43:58.981457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:37.521 [2024-07-24 15:43:58.981474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.335 ms 00:18:37.521 [2024-07-24 15:43:58.981488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.780 [2024-07-24 15:43:59.167259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.780 [2024-07-24 15:43:59.167358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:37.780 [2024-07-24 15:43:59.167382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 185.742 ms 00:18:37.780 [2024-07-24 15:43:59.167397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.780 [2024-07-24 15:43:59.174152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.780 [2024-07-24 15:43:59.174221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:37.780 [2024-07-24 15:43:59.174243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.693 ms 00:18:37.780 [2024-07-24 15:43:59.174257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.780 [2024-07-24 15:43:59.207187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.780 [2024-07-24 15:43:59.207261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:37.780 [2024-07-24 15:43:59.207283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
32.796 ms 00:18:37.780 [2024-07-24 15:43:59.207301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.780 [2024-07-24 15:43:59.225828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.780 [2024-07-24 15:43:59.225885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:37.780 [2024-07-24 15:43:59.225905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.452 ms 00:18:37.780 [2024-07-24 15:43:59.225920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.780 [2024-07-24 15:43:59.226132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.780 [2024-07-24 15:43:59.226164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:37.780 [2024-07-24 15:43:59.226182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:18:37.780 [2024-07-24 15:43:59.226196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.780 [2024-07-24 15:43:59.257277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.780 [2024-07-24 15:43:59.257350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:37.780 [2024-07-24 15:43:59.257370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.058 ms 00:18:37.780 [2024-07-24 15:43:59.257384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.780 [2024-07-24 15:43:59.288172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.780 [2024-07-24 15:43:59.288221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:37.780 [2024-07-24 15:43:59.288239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.738 ms 00:18:37.780 [2024-07-24 15:43:59.288256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.780 [2024-07-24 15:43:59.318887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.780 [2024-07-24 15:43:59.318952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:37.780 [2024-07-24 15:43:59.318973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.582 ms 00:18:37.780 [2024-07-24 15:43:59.318987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.780 [2024-07-24 15:43:59.349802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.780 [2024-07-24 15:43:59.349858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:37.780 [2024-07-24 15:43:59.349877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.655 ms 00:18:37.780 [2024-07-24 15:43:59.349891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.780 [2024-07-24 15:43:59.349943] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:37.780 [2024-07-24 15:43:59.349971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:37.780 [2024-07-24 15:43:59.349987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:37.780 [2024-07-24 15:43:59.350001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:37.780 [2024-07-24 15:43:59.350013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:37.780 [2024-07-24 
15:43:59.350027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
[ftl_dev_dump_bands, Bands 6-100: identical entries, each 0 / 261120 wr_cnt: 0 state: free]
00:18:37.782 [2024-07-24 15:43:59.351358] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:37.782 [2024-07-24 15:43:59.351371] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7e066178-d351-4fe5-a27d-3c399bea1eee 00:18:37.782 [2024-07-24 15:43:59.351386] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:37.782 [2024-07-24 15:43:59.351398] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:37.782
[2024-07-24 15:43:59.351410] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:37.782 [2024-07-24 15:43:59.351422] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:37.782 [2024-07-24 15:43:59.351435] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:37.782 [2024-07-24 15:43:59.351446] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:37.782 [2024-07-24 15:43:59.351460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:37.782 [2024-07-24 15:43:59.351470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:37.782 [2024-07-24 15:43:59.351482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:37.782 [2024-07-24 15:43:59.351493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.782 [2024-07-24 15:43:59.351510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:37.782 [2024-07-24 15:43:59.351523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.552 ms 00:18:37.782 [2024-07-24 15:43:59.351536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.782 [2024-07-24 15:43:59.368077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.782 [2024-07-24 15:43:59.368143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:37.782 [2024-07-24 15:43:59.368162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.469 ms 00:18:37.782 [2024-07-24 15:43:59.368178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.782 [2024-07-24 15:43:59.368425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.782 [2024-07-24 15:43:59.368444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:37.782 [2024-07-24 15:43:59.368458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:18:37.782 [2024-07-24 15:43:59.368471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.040 [2024-07-24 15:43:59.417152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.040 [2024-07-24 15:43:59.417215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:38.040 [2024-07-24 15:43:59.417234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.040 [2024-07-24 15:43:59.417249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.040 [2024-07-24 15:43:59.417328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.040 [2024-07-24 15:43:59.417346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:38.040 [2024-07-24 15:43:59.417359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.040 [2024-07-24 15:43:59.417372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.040 [2024-07-24 15:43:59.417484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.040 [2024-07-24 15:43:59.417508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:38.040 [2024-07-24 15:43:59.417522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.040 [2024-07-24 15:43:59.417537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.040 [2024-07-24 15:43:59.417561] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:18:38.040 [2024-07-24 15:43:59.417580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:38.040 [2024-07-24 15:43:59.417593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.040 [2024-07-24 15:43:59.417605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.040 [2024-07-24 15:43:59.517212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.040 [2024-07-24 15:43:59.517282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:38.040 [2024-07-24 15:43:59.517302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.040 [2024-07-24 15:43:59.517316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.040 [2024-07-24 15:43:59.556621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.040 [2024-07-24 15:43:59.556711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:38.040 [2024-07-24 15:43:59.556740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.040 [2024-07-24 15:43:59.556756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.040 [2024-07-24 15:43:59.556885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.040 [2024-07-24 15:43:59.556911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:38.040 [2024-07-24 15:43:59.556926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.040 [2024-07-24 15:43:59.556948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.040 [2024-07-24 15:43:59.557022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.040 [2024-07-24 15:43:59.557047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:38.040 [2024-07-24 15:43:59.557071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.040 [2024-07-24 15:43:59.557120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.040 [2024-07-24 15:43:59.557275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.040 [2024-07-24 15:43:59.557313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:38.040 [2024-07-24 15:43:59.557330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.040 [2024-07-24 15:43:59.557344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.040 [2024-07-24 15:43:59.557423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.040 [2024-07-24 15:43:59.557448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:38.040 [2024-07-24 15:43:59.557462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.041 [2024-07-24 15:43:59.557478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.041 [2024-07-24 15:43:59.557525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.041 [2024-07-24 15:43:59.557544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:38.041 [2024-07-24 15:43:59.557565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.041 [2024-07-24 15:43:59.557589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
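
Each management step in the FTL shutdown trace above and below is logged by mngt/ftl_mngt.c trace_step as a fixed record: an Action or Rollback marker, then name, duration and status lines. That makes slow steps easy to rank straight from the console capture. A minimal reader-side sketch, assuming the run was saved to a hypothetical ftl.log (the two patterns match the trace_step lines in this transcript; the file name and the rest of the pipeline are illustrative):

  awk '
    /trace_step:.*name:/     { sub(/.*name: /, "");     step = $0 }           # remember the step name
    /trace_step:.*duration:/ { sub(/.*duration: /, ""); print $1 "\t" step }  # emit "<ms>  <name>"
  ' ftl.log | sort -rn | head                                                 # e.g. 16.469  Deinitialize L2P
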
00:18:38.041 [2024-07-24 15:43:59.557659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.041 [2024-07-24 15:43:59.557683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:38.041 [2024-07-24 15:43:59.557699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.041 [2024-07-24 15:43:59.557713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.041 [2024-07-24 15:43:59.557893] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 581.332 ms, result 0 00:18:38.041 true 00:18:38.041 15:43:59 -- ftl/bdevperf.sh@37 -- # killprocess 72478 00:18:38.041 15:43:59 -- common/autotest_common.sh@926 -- # '[' -z 72478 ']' 00:18:38.041 15:43:59 -- common/autotest_common.sh@930 -- # kill -0 72478 00:18:38.041 15:43:59 -- common/autotest_common.sh@931 -- # uname 00:18:38.041 15:43:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:38.041 15:43:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72478 00:18:38.041 killing process with pid 72478 00:18:38.041 Received shutdown signal, test time was about 4.000000 seconds 00:18:38.041 00:18:38.041 Latency(us) 00:18:38.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.041 =================================================================================================================== 00:18:38.041 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:38.041 15:43:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:38.041 15:43:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:38.041 15:43:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72478' 00:18:38.041 15:43:59 -- common/autotest_common.sh@945 -- # kill 72478 00:18:38.041 15:43:59 -- common/autotest_common.sh@950 -- # wait 72478 00:18:42.221 15:44:03 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:18:42.221 15:44:03 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:18:42.221 15:44:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:42.221 15:44:03 -- common/autotest_common.sh@10 -- # set +x 00:18:42.221 15:44:03 -- ftl/bdevperf.sh@41 -- # remove_shm 00:18:42.221 Remove shared memory files 00:18:42.221 15:44:03 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:42.221 15:44:03 -- ftl/common.sh@205 -- # rm -f rm -f 00:18:42.221 15:44:03 -- ftl/common.sh@206 -- # rm -f rm -f 00:18:42.221 15:44:03 -- ftl/common.sh@207 -- # rm -f rm -f 00:18:42.221 15:44:03 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:42.221 15:44:03 -- ftl/common.sh@209 -- # rm -f rm -f 00:18:42.221 00:18:42.221 real 0m24.934s 00:18:42.221 user 0m28.738s 00:18:42.221 sys 0m1.155s 00:18:42.221 15:44:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:42.221 15:44:03 -- common/autotest_common.sh@10 -- # set +x 00:18:42.221 ************************************ 00:18:42.221 END TEST ftl_bdevperf 00:18:42.221 ************************************ 00:18:42.221 15:44:03 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:18:42.221 15:44:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:42.221 15:44:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:42.221 15:44:03 -- common/autotest_common.sh@10 -- # set +x 00:18:42.221 ************************************ 
00:18:42.221 START TEST ftl_trim 00:18:42.221 ************************************ 00:18:42.221 15:44:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:18:42.221 * Looking for test storage... 00:18:42.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:42.221 15:44:03 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:42.221 15:44:03 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:42.221 15:44:03 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:42.221 15:44:03 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:42.221 15:44:03 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:42.221 15:44:03 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:42.221 15:44:03 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:42.221 15:44:03 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:42.221 15:44:03 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:42.221 15:44:03 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:42.221 15:44:03 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:42.221 15:44:03 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:42.221 15:44:03 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:42.221 15:44:03 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:42.221 15:44:03 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:42.222 15:44:03 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:42.222 15:44:03 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:42.222 15:44:03 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:42.222 15:44:03 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:42.222 15:44:03 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:42.222 15:44:03 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:42.222 15:44:03 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:42.222 15:44:03 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:42.222 15:44:03 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:42.222 15:44:03 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:42.222 15:44:03 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:42.222 15:44:03 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:42.222 15:44:03 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:42.222 15:44:03 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:42.222 15:44:03 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:42.222 15:44:03 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:18:42.222 15:44:03 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:18:42.222 15:44:03 -- ftl/trim.sh@25 -- # timeout=240 00:18:42.222 15:44:03 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:42.222 15:44:03 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:42.222 15:44:03 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:42.222 15:44:03 -- ftl/trim.sh@34 -- # 
export FTL_BDEV_NAME=ftl0 00:18:42.222 15:44:03 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:42.222 15:44:03 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:42.222 15:44:03 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:42.222 15:44:03 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:42.222 15:44:03 -- ftl/trim.sh@40 -- # svcpid=72841 00:18:42.222 15:44:03 -- ftl/trim.sh@41 -- # waitforlisten 72841 00:18:42.222 15:44:03 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:42.222 15:44:03 -- common/autotest_common.sh@819 -- # '[' -z 72841 ']' 00:18:42.222 15:44:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.222 15:44:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:42.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.222 15:44:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.222 15:44:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:42.222 15:44:03 -- common/autotest_common.sh@10 -- # set +x 00:18:42.222 [2024-07-24 15:44:03.585763] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:42.222 [2024-07-24 15:44:03.585919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72841 ] 00:18:42.222 [2024-07-24 15:44:03.755197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:42.479 [2024-07-24 15:44:03.961697] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:42.479 [2024-07-24 15:44:03.962029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.479 [2024-07-24 15:44:03.962138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.479 [2024-07-24 15:44:03.962157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.849 15:44:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:43.849 15:44:05 -- common/autotest_common.sh@852 -- # return 0 00:18:43.850 15:44:05 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:18:43.850 15:44:05 -- ftl/common.sh@54 -- # local name=nvme0 00:18:43.850 15:44:05 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:18:43.850 15:44:05 -- ftl/common.sh@56 -- # local size=103424 00:18:43.850 15:44:05 -- ftl/common.sh@59 -- # local base_bdev 00:18:43.850 15:44:05 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:18:44.107 15:44:05 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:44.107 15:44:05 -- ftl/common.sh@62 -- # local base_size 00:18:44.107 15:44:05 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:44.107 15:44:05 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:18:44.107 15:44:05 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:44.108 15:44:05 -- common/autotest_common.sh@1359 -- # local bs 00:18:44.108 15:44:05 -- common/autotest_common.sh@1360 -- # local nb 00:18:44.108 15:44:05 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:44.365 
15:44:05 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:44.365 { 00:18:44.365 "name": "nvme0n1", 00:18:44.365 "aliases": [ 00:18:44.365 "124fe694-acaa-4f11-b9da-aaf6d30c4607" 00:18:44.365 ], 00:18:44.365 "product_name": "NVMe disk", 00:18:44.365 "block_size": 4096, 00:18:44.365 "num_blocks": 1310720, 00:18:44.365 "uuid": "124fe694-acaa-4f11-b9da-aaf6d30c4607", 00:18:44.365 "assigned_rate_limits": { 00:18:44.365 "rw_ios_per_sec": 0, 00:18:44.365 "rw_mbytes_per_sec": 0, 00:18:44.365 "r_mbytes_per_sec": 0, 00:18:44.365 "w_mbytes_per_sec": 0 00:18:44.365 }, 00:18:44.365 "claimed": true, 00:18:44.365 "claim_type": "read_many_write_one", 00:18:44.365 "zoned": false, 00:18:44.365 "supported_io_types": { 00:18:44.365 "read": true, 00:18:44.365 "write": true, 00:18:44.365 "unmap": true, 00:18:44.365 "write_zeroes": true, 00:18:44.365 "flush": true, 00:18:44.365 "reset": true, 00:18:44.365 "compare": true, 00:18:44.365 "compare_and_write": false, 00:18:44.365 "abort": true, 00:18:44.365 "nvme_admin": true, 00:18:44.365 "nvme_io": true 00:18:44.365 }, 00:18:44.365 "driver_specific": { 00:18:44.365 "nvme": [ 00:18:44.365 { 00:18:44.365 "pci_address": "0000:00:07.0", 00:18:44.365 "trid": { 00:18:44.365 "trtype": "PCIe", 00:18:44.365 "traddr": "0000:00:07.0" 00:18:44.365 }, 00:18:44.365 "ctrlr_data": { 00:18:44.365 "cntlid": 0, 00:18:44.365 "vendor_id": "0x1b36", 00:18:44.365 "model_number": "QEMU NVMe Ctrl", 00:18:44.365 "serial_number": "12341", 00:18:44.365 "firmware_revision": "8.0.0", 00:18:44.365 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:44.365 "oacs": { 00:18:44.365 "security": 0, 00:18:44.365 "format": 1, 00:18:44.365 "firmware": 0, 00:18:44.365 "ns_manage": 1 00:18:44.365 }, 00:18:44.365 "multi_ctrlr": false, 00:18:44.365 "ana_reporting": false 00:18:44.365 }, 00:18:44.365 "vs": { 00:18:44.365 "nvme_version": "1.4" 00:18:44.365 }, 00:18:44.365 "ns_data": { 00:18:44.365 "id": 1, 00:18:44.365 "can_share": false 00:18:44.365 } 00:18:44.365 } 00:18:44.365 ], 00:18:44.365 "mp_policy": "active_passive" 00:18:44.365 } 00:18:44.365 } 00:18:44.365 ]' 00:18:44.365 15:44:05 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:44.365 15:44:05 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:44.365 15:44:05 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:44.622 15:44:05 -- common/autotest_common.sh@1363 -- # nb=1310720 00:18:44.622 15:44:05 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:18:44.622 15:44:05 -- common/autotest_common.sh@1367 -- # echo 5120 00:18:44.622 15:44:05 -- ftl/common.sh@63 -- # base_size=5120 00:18:44.622 15:44:05 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:44.622 15:44:05 -- ftl/common.sh@67 -- # clear_lvols 00:18:44.622 15:44:05 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:44.622 15:44:05 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:44.879 15:44:06 -- ftl/common.sh@28 -- # stores=68681655-294f-4645-ac8d-ef325efab724 00:18:44.879 15:44:06 -- ftl/common.sh@29 -- # for lvs in $stores 00:18:44.879 15:44:06 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 68681655-294f-4645-ac8d-ef325efab724 00:18:45.136 15:44:06 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:45.395 15:44:06 -- ftl/common.sh@68 -- # lvs=999cf7d9-32e4-4432-b6dc-a2c36408958b 00:18:45.395 15:44:06 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
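
The jq sequence just traced is the stock get_bdev_size helper: bdev_get_bdevs returns the JSON array shown, block_size and num_blocks are pulled out with jq, and the product is reduced to MiB (4096 x 1310720 blocks gives 5120 MiB for nvme0n1, the base_size that the 103424 MiB request is compared against before the thin-provisioned lvol is created). A sketch reconstructed from the xtrace lines -- the rpc.py and jq invocations are verbatim, the surrounding plumbing is an assumption:

  get_bdev_size() {
      local bdev_name=$1 bdev_info bs nb
      bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
      bs=$(jq '.[] .block_size' <<< "$bdev_info")      # 4096 for nvme0n1
      nb=$(jq '.[] .num_blocks' <<< "$bdev_info")      # 1310720 for nvme0n1
      echo $(( bs * nb / 1024 / 1024 ))                # bytes -> MiB: 5120
  }
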
nvme0n1p0 103424 -t -u 999cf7d9-32e4-4432-b6dc-a2c36408958b 00:18:45.653 15:44:07 -- ftl/trim.sh@43 -- # split_bdev=2392fd2c-09a0-4e68-89cd-fc94295a11ef 00:18:45.653 15:44:07 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 2392fd2c-09a0-4e68-89cd-fc94295a11ef 00:18:45.653 15:44:07 -- ftl/common.sh@35 -- # local name=nvc0 00:18:45.653 15:44:07 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:18:45.653 15:44:07 -- ftl/common.sh@37 -- # local base_bdev=2392fd2c-09a0-4e68-89cd-fc94295a11ef 00:18:45.653 15:44:07 -- ftl/common.sh@38 -- # local cache_size= 00:18:45.653 15:44:07 -- ftl/common.sh@41 -- # get_bdev_size 2392fd2c-09a0-4e68-89cd-fc94295a11ef 00:18:45.653 15:44:07 -- common/autotest_common.sh@1357 -- # local bdev_name=2392fd2c-09a0-4e68-89cd-fc94295a11ef 00:18:45.653 15:44:07 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:45.653 15:44:07 -- common/autotest_common.sh@1359 -- # local bs 00:18:45.653 15:44:07 -- common/autotest_common.sh@1360 -- # local nb 00:18:45.653 15:44:07 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2392fd2c-09a0-4e68-89cd-fc94295a11ef 00:18:45.911 15:44:07 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:45.911 { 00:18:45.911 "name": "2392fd2c-09a0-4e68-89cd-fc94295a11ef", 00:18:45.911 "aliases": [ 00:18:45.911 "lvs/nvme0n1p0" 00:18:45.911 ], 00:18:45.911 "product_name": "Logical Volume", 00:18:45.911 "block_size": 4096, 00:18:45.911 "num_blocks": 26476544, 00:18:45.911 "uuid": "2392fd2c-09a0-4e68-89cd-fc94295a11ef", 00:18:45.911 "assigned_rate_limits": { 00:18:45.911 "rw_ios_per_sec": 0, 00:18:45.911 "rw_mbytes_per_sec": 0, 00:18:45.911 "r_mbytes_per_sec": 0, 00:18:45.911 "w_mbytes_per_sec": 0 00:18:45.911 }, 00:18:45.911 "claimed": false, 00:18:45.911 "zoned": false, 00:18:45.911 "supported_io_types": { 00:18:45.911 "read": true, 00:18:45.911 "write": true, 00:18:45.911 "unmap": true, 00:18:45.911 "write_zeroes": true, 00:18:45.911 "flush": false, 00:18:45.911 "reset": true, 00:18:45.911 "compare": false, 00:18:45.911 "compare_and_write": false, 00:18:45.911 "abort": false, 00:18:45.911 "nvme_admin": false, 00:18:45.911 "nvme_io": false 00:18:45.911 }, 00:18:45.911 "driver_specific": { 00:18:45.911 "lvol": { 00:18:45.911 "lvol_store_uuid": "999cf7d9-32e4-4432-b6dc-a2c36408958b", 00:18:45.911 "base_bdev": "nvme0n1", 00:18:45.911 "thin_provision": true, 00:18:45.911 "snapshot": false, 00:18:45.911 "clone": false, 00:18:45.911 "esnap_clone": false 00:18:45.911 } 00:18:45.911 } 00:18:45.911 } 00:18:45.911 ]' 00:18:45.911 15:44:07 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:45.911 15:44:07 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:45.911 15:44:07 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:45.911 15:44:07 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:45.911 15:44:07 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:45.911 15:44:07 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:45.911 15:44:07 -- ftl/common.sh@41 -- # local base_size=5171 00:18:45.911 15:44:07 -- ftl/common.sh@44 -- # local nvc_bdev 00:18:45.911 15:44:07 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:18:46.476 15:44:07 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:46.477 15:44:07 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:46.477 15:44:07 -- ftl/common.sh@48 -- # get_bdev_size 2392fd2c-09a0-4e68-89cd-fc94295a11ef 00:18:46.477 15:44:07 
-- common/autotest_common.sh@1357 -- # local bdev_name=2392fd2c-09a0-4e68-89cd-fc94295a11ef 00:18:46.477 15:44:07 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:46.477 15:44:07 -- common/autotest_common.sh@1359 -- # local bs 00:18:46.477 15:44:07 -- common/autotest_common.sh@1360 -- # local nb 00:18:46.477 15:44:07 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2392fd2c-09a0-4e68-89cd-fc94295a11ef 00:18:46.477 15:44:08 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:46.477 { 00:18:46.477 "name": "2392fd2c-09a0-4e68-89cd-fc94295a11ef", 00:18:46.477 "aliases": [ 00:18:46.477 "lvs/nvme0n1p0" 00:18:46.477 ], 00:18:46.477 "product_name": "Logical Volume", 00:18:46.477 "block_size": 4096, 00:18:46.477 "num_blocks": 26476544, 00:18:46.477 "uuid": "2392fd2c-09a0-4e68-89cd-fc94295a11ef", 00:18:46.477 "assigned_rate_limits": { 00:18:46.477 "rw_ios_per_sec": 0, 00:18:46.477 "rw_mbytes_per_sec": 0, 00:18:46.477 "r_mbytes_per_sec": 0, 00:18:46.477 "w_mbytes_per_sec": 0 00:18:46.477 }, 00:18:46.477 "claimed": false, 00:18:46.477 "zoned": false, 00:18:46.477 "supported_io_types": { 00:18:46.477 "read": true, 00:18:46.477 "write": true, 00:18:46.477 "unmap": true, 00:18:46.477 "write_zeroes": true, 00:18:46.477 "flush": false, 00:18:46.477 "reset": true, 00:18:46.477 "compare": false, 00:18:46.477 "compare_and_write": false, 00:18:46.477 "abort": false, 00:18:46.477 "nvme_admin": false, 00:18:46.477 "nvme_io": false 00:18:46.477 }, 00:18:46.477 "driver_specific": { 00:18:46.477 "lvol": { 00:18:46.477 "lvol_store_uuid": "999cf7d9-32e4-4432-b6dc-a2c36408958b", 00:18:46.477 "base_bdev": "nvme0n1", 00:18:46.477 "thin_provision": true, 00:18:46.477 "snapshot": false, 00:18:46.477 "clone": false, 00:18:46.477 "esnap_clone": false 00:18:46.477 } 00:18:46.477 } 00:18:46.477 } 00:18:46.477 ]' 00:18:46.477 15:44:08 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:46.734 15:44:08 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:46.734 15:44:08 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:46.734 15:44:08 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:46.734 15:44:08 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:46.734 15:44:08 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:46.734 15:44:08 -- ftl/common.sh@48 -- # cache_size=5171 00:18:46.734 15:44:08 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:46.991 15:44:08 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:46.991 15:44:08 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:46.991 15:44:08 -- ftl/trim.sh@47 -- # get_bdev_size 2392fd2c-09a0-4e68-89cd-fc94295a11ef 00:18:46.991 15:44:08 -- common/autotest_common.sh@1357 -- # local bdev_name=2392fd2c-09a0-4e68-89cd-fc94295a11ef 00:18:46.991 15:44:08 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:46.991 15:44:08 -- common/autotest_common.sh@1359 -- # local bs 00:18:46.991 15:44:08 -- common/autotest_common.sh@1360 -- # local nb 00:18:46.991 15:44:08 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2392fd2c-09a0-4e68-89cd-fc94295a11ef 00:18:47.249 15:44:08 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:47.249 { 00:18:47.249 "name": "2392fd2c-09a0-4e68-89cd-fc94295a11ef", 00:18:47.249 "aliases": [ 00:18:47.249 "lvs/nvme0n1p0" 00:18:47.249 ], 00:18:47.249 "product_name": "Logical Volume", 00:18:47.249 "block_size": 4096, 00:18:47.249 
"num_blocks": 26476544, 00:18:47.249 "uuid": "2392fd2c-09a0-4e68-89cd-fc94295a11ef", 00:18:47.249 "assigned_rate_limits": { 00:18:47.249 "rw_ios_per_sec": 0, 00:18:47.249 "rw_mbytes_per_sec": 0, 00:18:47.249 "r_mbytes_per_sec": 0, 00:18:47.249 "w_mbytes_per_sec": 0 00:18:47.249 }, 00:18:47.249 "claimed": false, 00:18:47.249 "zoned": false, 00:18:47.249 "supported_io_types": { 00:18:47.249 "read": true, 00:18:47.249 "write": true, 00:18:47.249 "unmap": true, 00:18:47.249 "write_zeroes": true, 00:18:47.249 "flush": false, 00:18:47.249 "reset": true, 00:18:47.249 "compare": false, 00:18:47.249 "compare_and_write": false, 00:18:47.249 "abort": false, 00:18:47.249 "nvme_admin": false, 00:18:47.249 "nvme_io": false 00:18:47.249 }, 00:18:47.249 "driver_specific": { 00:18:47.249 "lvol": { 00:18:47.249 "lvol_store_uuid": "999cf7d9-32e4-4432-b6dc-a2c36408958b", 00:18:47.249 "base_bdev": "nvme0n1", 00:18:47.249 "thin_provision": true, 00:18:47.249 "snapshot": false, 00:18:47.249 "clone": false, 00:18:47.249 "esnap_clone": false 00:18:47.249 } 00:18:47.249 } 00:18:47.249 } 00:18:47.249 ]' 00:18:47.249 15:44:08 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:47.249 15:44:08 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:47.249 15:44:08 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:47.507 15:44:08 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:47.507 15:44:08 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:47.507 15:44:08 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:47.507 15:44:08 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:47.507 15:44:08 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2392fd2c-09a0-4e68-89cd-fc94295a11ef -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:47.765 [2024-07-24 15:44:09.191731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.765 [2024-07-24 15:44:09.191831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:47.765 [2024-07-24 15:44:09.191882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:47.765 [2024-07-24 15:44:09.191903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.765 [2024-07-24 15:44:09.195727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.765 [2024-07-24 15:44:09.195786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:47.765 [2024-07-24 15:44:09.195809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.765 ms 00:18:47.765 [2024-07-24 15:44:09.195822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.765 [2024-07-24 15:44:09.196024] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:47.765 [2024-07-24 15:44:09.197026] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:47.765 [2024-07-24 15:44:09.197077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.765 [2024-07-24 15:44:09.197108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:47.765 [2024-07-24 15:44:09.197126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:18:47.765 [2024-07-24 15:44:09.197139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.765 [2024-07-24 15:44:09.197488] mngt/ftl_mngt_md.c: 
567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 610a91da-4184-4e70-9b70-5012476db97f 00:18:47.765 [2024-07-24 15:44:09.198598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.765 [2024-07-24 15:44:09.198646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:47.765 [2024-07-24 15:44:09.198666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:47.765 [2024-07-24 15:44:09.198680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.766 [2024-07-24 15:44:09.203888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.766 [2024-07-24 15:44:09.203978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:47.766 [2024-07-24 15:44:09.204002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.095 ms 00:18:47.766 [2024-07-24 15:44:09.204043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.766 [2024-07-24 15:44:09.204362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.766 [2024-07-24 15:44:09.204407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:47.766 [2024-07-24 15:44:09.204434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:18:47.766 [2024-07-24 15:44:09.204463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.766 [2024-07-24 15:44:09.204526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.766 [2024-07-24 15:44:09.204547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:47.766 [2024-07-24 15:44:09.204563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:47.766 [2024-07-24 15:44:09.204577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.766 [2024-07-24 15:44:09.204631] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:47.766 [2024-07-24 15:44:09.209704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.766 [2024-07-24 15:44:09.209759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:47.766 [2024-07-24 15:44:09.209786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.081 ms 00:18:47.766 [2024-07-24 15:44:09.209808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.766 [2024-07-24 15:44:09.210010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.766 [2024-07-24 15:44:09.210040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:47.766 [2024-07-24 15:44:09.210066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:47.766 [2024-07-24 15:44:09.210109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.766 [2024-07-24 15:44:09.210176] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:47.766 [2024-07-24 15:44:09.210407] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:47.766 [2024-07-24 15:44:09.210453] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:47.766 [2024-07-24 15:44:09.210471] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x140 bytes 00:18:47.766 [2024-07-24 15:44:09.210489] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:47.766 [2024-07-24 15:44:09.210510] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:47.766 [2024-07-24 15:44:09.210525] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:47.766 [2024-07-24 15:44:09.210537] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:47.766 [2024-07-24 15:44:09.210555] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:47.766 [2024-07-24 15:44:09.210567] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:47.766 [2024-07-24 15:44:09.210581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.766 [2024-07-24 15:44:09.210594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:47.766 [2024-07-24 15:44:09.210609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:18:47.766 [2024-07-24 15:44:09.210621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.766 [2024-07-24 15:44:09.210715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.766 [2024-07-24 15:44:09.210730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:47.766 [2024-07-24 15:44:09.210745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:47.766 [2024-07-24 15:44:09.210756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.766 [2024-07-24 15:44:09.210879] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:47.766 [2024-07-24 15:44:09.210896] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:47.766 [2024-07-24 15:44:09.210911] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:47.766 [2024-07-24 15:44:09.210923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.766 [2024-07-24 15:44:09.210952] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:47.766 [2024-07-24 15:44:09.210967] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:47.766 [2024-07-24 15:44:09.210981] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:47.766 [2024-07-24 15:44:09.210999] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:47.766 [2024-07-24 15:44:09.211022] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:47.766 [2024-07-24 15:44:09.211036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:47.766 [2024-07-24 15:44:09.211050] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:47.766 [2024-07-24 15:44:09.211061] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:47.766 [2024-07-24 15:44:09.211076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:47.766 [2024-07-24 15:44:09.211109] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:47.766 [2024-07-24 15:44:09.211127] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:47.766 [2024-07-24 15:44:09.211138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.766 [2024-07-24 15:44:09.211153] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:18:47.766 [2024-07-24 15:44:09.211165] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:47.766 [2024-07-24 15:44:09.211178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.766 [2024-07-24 15:44:09.211190] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:47.766 [2024-07-24 15:44:09.211203] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:47.766 [2024-07-24 15:44:09.211215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:47.766 [2024-07-24 15:44:09.211230] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:47.766 [2024-07-24 15:44:09.211241] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:47.766 [2024-07-24 15:44:09.211254] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:47.766 [2024-07-24 15:44:09.211265] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:47.766 [2024-07-24 15:44:09.211278] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:47.766 [2024-07-24 15:44:09.211289] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:47.766 [2024-07-24 15:44:09.211302] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:47.766 [2024-07-24 15:44:09.211313] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:47.766 [2024-07-24 15:44:09.211326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:47.766 [2024-07-24 15:44:09.211337] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:47.766 [2024-07-24 15:44:09.211352] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:47.766 [2024-07-24 15:44:09.211362] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:47.766 [2024-07-24 15:44:09.211375] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:47.766 [2024-07-24 15:44:09.211386] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:47.766 [2024-07-24 15:44:09.211399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:47.766 [2024-07-24 15:44:09.211410] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:47.766 [2024-07-24 15:44:09.211425] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:47.766 [2024-07-24 15:44:09.211436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:47.766 [2024-07-24 15:44:09.211449] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:47.766 [2024-07-24 15:44:09.211461] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:47.766 [2024-07-24 15:44:09.211474] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:47.766 [2024-07-24 15:44:09.211486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.766 [2024-07-24 15:44:09.211499] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:47.766 [2024-07-24 15:44:09.211511] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:47.766 [2024-07-24 15:44:09.211523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:47.766 [2024-07-24 15:44:09.211535] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:47.766 [2024-07-24 15:44:09.211550] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:47.766 [2024-07-24 15:44:09.211561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:47.766 [2024-07-24 15:44:09.211576] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:47.766 [2024-07-24 15:44:09.211591] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:47.766 [2024-07-24 15:44:09.211610] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:47.766 [2024-07-24 15:44:09.211622] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:47.766 [2024-07-24 15:44:09.211637] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:47.766 [2024-07-24 15:44:09.211650] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:47.766 [2024-07-24 15:44:09.211664] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:47.766 [2024-07-24 15:44:09.211676] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:47.766 [2024-07-24 15:44:09.211689] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:47.766 [2024-07-24 15:44:09.211701] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:47.766 [2024-07-24 15:44:09.211715] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:47.766 [2024-07-24 15:44:09.211727] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:47.767 [2024-07-24 15:44:09.211740] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:47.767 [2024-07-24 15:44:09.211753] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:47.767 [2024-07-24 15:44:09.211771] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:47.767 [2024-07-24 15:44:09.211783] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:47.767 [2024-07-24 15:44:09.211798] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:47.767 [2024-07-24 15:44:09.211811] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:47.767 [2024-07-24 15:44:09.211825] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:47.767 [2024-07-24 15:44:09.211838] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:47.767 [2024-07-24 15:44:09.211852] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:47.767 [2024-07-24 15:44:09.211865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.767 [2024-07-24 15:44:09.211881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:47.767 [2024-07-24 15:44:09.211894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.041 ms 00:18:47.767 [2024-07-24 15:44:09.211907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.767 [2024-07-24 15:44:09.232454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.767 [2024-07-24 15:44:09.232725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:47.767 [2024-07-24 15:44:09.232863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.446 ms 00:18:47.767 [2024-07-24 15:44:09.232923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.767 [2024-07-24 15:44:09.233233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.767 [2024-07-24 15:44:09.233402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:47.767 [2024-07-24 15:44:09.233535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:47.767 [2024-07-24 15:44:09.233595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.767 [2024-07-24 15:44:09.276432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.767 [2024-07-24 15:44:09.276701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:47.767 [2024-07-24 15:44:09.276862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.717 ms 00:18:47.767 [2024-07-24 15:44:09.276923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.767 [2024-07-24 15:44:09.277242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.767 [2024-07-24 15:44:09.277354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:47.767 [2024-07-24 15:44:09.277598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:47.767 [2024-07-24 15:44:09.277692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.767 [2024-07-24 15:44:09.278242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.767 [2024-07-24 15:44:09.278406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:47.767 [2024-07-24 15:44:09.278602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:18:47.767 [2024-07-24 15:44:09.278690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.767 [2024-07-24 15:44:09.278999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.767 [2024-07-24 15:44:09.279095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:47.767 [2024-07-24 15:44:09.279250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:18:47.767 [2024-07-24 15:44:09.279509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.767 [2024-07-24 15:44:09.304494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.767 [2024-07-24 
15:44:09.304774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:47.767 [2024-07-24 15:44:09.304822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.801 ms 00:18:47.767 [2024-07-24 15:44:09.304848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.767 [2024-07-24 15:44:09.318966] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:47.767 [2024-07-24 15:44:09.333361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.767 [2024-07-24 15:44:09.333443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:47.767 [2024-07-24 15:44:09.333468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.220 ms 00:18:47.767 [2024-07-24 15:44:09.333482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.024 [2024-07-24 15:44:09.400798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.024 [2024-07-24 15:44:09.400893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:48.024 [2024-07-24 15:44:09.400919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.159 ms 00:18:48.024 [2024-07-24 15:44:09.400932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.024 [2024-07-24 15:44:09.401079] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:18:48.024 [2024-07-24 15:44:09.401128] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:18:50.549 [2024-07-24 15:44:11.571958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.549 [2024-07-24 15:44:11.572048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:50.549 [2024-07-24 15:44:11.572074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2170.881 ms 00:18:50.549 [2024-07-24 15:44:11.572106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.549 [2024-07-24 15:44:11.572494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.549 [2024-07-24 15:44:11.572525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:50.549 [2024-07-24 15:44:11.572543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:18:50.549 [2024-07-24 15:44:11.572558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.549 [2024-07-24 15:44:11.605779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.549 [2024-07-24 15:44:11.605860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:50.549 [2024-07-24 15:44:11.605886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.151 ms 00:18:50.549 [2024-07-24 15:44:11.605899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.549 [2024-07-24 15:44:11.639172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.549 [2024-07-24 15:44:11.639270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:50.549 [2024-07-24 15:44:11.639302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.094 ms 00:18:50.549 [2024-07-24 15:44:11.639316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.549 [2024-07-24 15:44:11.639871] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.549 [2024-07-24 15:44:11.639905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:50.549 [2024-07-24 15:44:11.639932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:18:50.549 [2024-07-24 15:44:11.639950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.549 [2024-07-24 15:44:11.720861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.549 [2024-07-24 15:44:11.720944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:50.549 [2024-07-24 15:44:11.720969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.839 ms 00:18:50.549 [2024-07-24 15:44:11.720982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.549 [2024-07-24 15:44:11.754376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.549 [2024-07-24 15:44:11.754462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:50.549 [2024-07-24 15:44:11.754491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.211 ms 00:18:50.549 [2024-07-24 15:44:11.754505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.549 [2024-07-24 15:44:11.758665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.549 [2024-07-24 15:44:11.758722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:50.549 [2024-07-24 15:44:11.758747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.019 ms 00:18:50.549 [2024-07-24 15:44:11.758759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.549 [2024-07-24 15:44:11.796489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.549 [2024-07-24 15:44:11.796586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:50.549 [2024-07-24 15:44:11.796612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.614 ms 00:18:50.550 [2024-07-24 15:44:11.796625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.550 [2024-07-24 15:44:11.796789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.550 [2024-07-24 15:44:11.796809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:50.550 [2024-07-24 15:44:11.796825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:50.550 [2024-07-24 15:44:11.796838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.550 [2024-07-24 15:44:11.796953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.550 [2024-07-24 15:44:11.796972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:50.550 [2024-07-24 15:44:11.796987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:50.550 [2024-07-24 15:44:11.796999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.550 [2024-07-24 15:44:11.798099] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:50.550 [2024-07-24 15:44:11.802648] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2606.038 ms, result 0 00:18:50.550 [2024-07-24 15:44:11.803605] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] 
FTL IO channel destroy on app_thread 00:18:50.550 { 00:18:50.550 "name": "ftl0", 00:18:50.550 "uuid": "610a91da-4184-4e70-9b70-5012476db97f" 00:18:50.550 } 00:18:50.550 15:44:11 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:50.550 15:44:11 -- common/autotest_common.sh@887 -- # local bdev_name=ftl0 00:18:50.550 15:44:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:50.550 15:44:11 -- common/autotest_common.sh@889 -- # local i 00:18:50.550 15:44:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:50.550 15:44:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:50.550 15:44:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:50.550 15:44:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:50.807 [ 00:18:50.807 { 00:18:50.807 "name": "ftl0", 00:18:50.807 "aliases": [ 00:18:50.807 "610a91da-4184-4e70-9b70-5012476db97f" 00:18:50.807 ], 00:18:50.807 "product_name": "FTL disk", 00:18:50.807 "block_size": 4096, 00:18:50.807 "num_blocks": 23592960, 00:18:50.807 "uuid": "610a91da-4184-4e70-9b70-5012476db97f", 00:18:50.807 "assigned_rate_limits": { 00:18:50.807 "rw_ios_per_sec": 0, 00:18:50.807 "rw_mbytes_per_sec": 0, 00:18:50.807 "r_mbytes_per_sec": 0, 00:18:50.807 "w_mbytes_per_sec": 0 00:18:50.807 }, 00:18:50.807 "claimed": false, 00:18:50.807 "zoned": false, 00:18:50.807 "supported_io_types": { 00:18:50.807 "read": true, 00:18:50.807 "write": true, 00:18:50.807 "unmap": true, 00:18:50.807 "write_zeroes": true, 00:18:50.808 "flush": true, 00:18:50.808 "reset": false, 00:18:50.808 "compare": false, 00:18:50.808 "compare_and_write": false, 00:18:50.808 "abort": false, 00:18:50.808 "nvme_admin": false, 00:18:50.808 "nvme_io": false 00:18:50.808 }, 00:18:50.808 "driver_specific": { 00:18:50.808 "ftl": { 00:18:50.808 "base_bdev": "2392fd2c-09a0-4e68-89cd-fc94295a11ef", 00:18:50.808 "cache": "nvc0n1p0" 00:18:50.808 } 00:18:50.808 } 00:18:50.808 } 00:18:50.808 ] 00:18:50.808 15:44:12 -- common/autotest_common.sh@895 -- # return 0 00:18:50.808 15:44:12 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:50.808 15:44:12 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:51.065 15:44:12 -- ftl/trim.sh@56 -- # echo ']}' 00:18:51.065 15:44:12 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:51.630 15:44:12 -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:51.630 { 00:18:51.630 "name": "ftl0", 00:18:51.630 "aliases": [ 00:18:51.630 "610a91da-4184-4e70-9b70-5012476db97f" 00:18:51.630 ], 00:18:51.630 "product_name": "FTL disk", 00:18:51.630 "block_size": 4096, 00:18:51.630 "num_blocks": 23592960, 00:18:51.630 "uuid": "610a91da-4184-4e70-9b70-5012476db97f", 00:18:51.630 "assigned_rate_limits": { 00:18:51.630 "rw_ios_per_sec": 0, 00:18:51.630 "rw_mbytes_per_sec": 0, 00:18:51.630 "r_mbytes_per_sec": 0, 00:18:51.630 "w_mbytes_per_sec": 0 00:18:51.630 }, 00:18:51.630 "claimed": false, 00:18:51.630 "zoned": false, 00:18:51.630 "supported_io_types": { 00:18:51.630 "read": true, 00:18:51.630 "write": true, 00:18:51.630 "unmap": true, 00:18:51.630 "write_zeroes": true, 00:18:51.630 "flush": true, 00:18:51.630 "reset": false, 00:18:51.630 "compare": false, 00:18:51.630 "compare_and_write": false, 00:18:51.630 "abort": false, 00:18:51.630 "nvme_admin": false, 00:18:51.630 "nvme_io": false 00:18:51.630 }, 00:18:51.630 "driver_specific": { 00:18:51.630 "ftl": { 
00:18:51.630 "base_bdev": "2392fd2c-09a0-4e68-89cd-fc94295a11ef", 00:18:51.630 "cache": "nvc0n1p0" 00:18:51.630 } 00:18:51.630 } 00:18:51.630 } 00:18:51.630 ]' 00:18:51.630 15:44:12 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:51.630 15:44:12 -- ftl/trim.sh@60 -- # nb=23592960 00:18:51.630 15:44:12 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:51.630 [2024-07-24 15:44:13.205198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.630 [2024-07-24 15:44:13.205272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:51.630 [2024-07-24 15:44:13.205295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:51.630 [2024-07-24 15:44:13.205309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.630 [2024-07-24 15:44:13.205357] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:51.630 [2024-07-24 15:44:13.208758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.630 [2024-07-24 15:44:13.208818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:51.630 [2024-07-24 15:44:13.208839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.367 ms 00:18:51.630 [2024-07-24 15:44:13.208852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.630 [2024-07-24 15:44:13.209595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.630 [2024-07-24 15:44:13.209631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:51.630 [2024-07-24 15:44:13.209656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:18:51.630 [2024-07-24 15:44:13.209669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.630 [2024-07-24 15:44:13.213483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.630 [2024-07-24 15:44:13.213539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:51.630 [2024-07-24 15:44:13.213564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.763 ms 00:18:51.630 [2024-07-24 15:44:13.213577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.630 [2024-07-24 15:44:13.221209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.630 [2024-07-24 15:44:13.221287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:51.630 [2024-07-24 15:44:13.221310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.521 ms 00:18:51.630 [2024-07-24 15:44:13.221322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.889 [2024-07-24 15:44:13.254460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.889 [2024-07-24 15:44:13.254539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:51.889 [2024-07-24 15:44:13.254563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.948 ms 00:18:51.889 [2024-07-24 15:44:13.254576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.889 [2024-07-24 15:44:13.274220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.889 [2024-07-24 15:44:13.274309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:51.890 [2024-07-24 15:44:13.274336] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.454 ms 00:18:51.890 [2024-07-24 15:44:13.274349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.890 [2024-07-24 15:44:13.274692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.890 [2024-07-24 15:44:13.274721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:51.890 [2024-07-24 15:44:13.274742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:18:51.890 [2024-07-24 15:44:13.274758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.890 [2024-07-24 15:44:13.308047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.890 [2024-07-24 15:44:13.308151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:51.890 [2024-07-24 15:44:13.308178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.233 ms 00:18:51.890 [2024-07-24 15:44:13.308190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.890 [2024-07-24 15:44:13.340805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.890 [2024-07-24 15:44:13.340884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:51.890 [2024-07-24 15:44:13.340909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.449 ms 00:18:51.890 [2024-07-24 15:44:13.340922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.890 [2024-07-24 15:44:13.373484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.890 [2024-07-24 15:44:13.373553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:51.890 [2024-07-24 15:44:13.373576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.399 ms 00:18:51.890 [2024-07-24 15:44:13.373589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.890 [2024-07-24 15:44:13.415540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.890 [2024-07-24 15:44:13.415659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:51.890 [2024-07-24 15:44:13.415702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.685 ms 00:18:51.890 [2024-07-24 15:44:13.415728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.890 [2024-07-24 15:44:13.415962] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:51.890 [2024-07-24 15:44:13.416008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 
[2024-07-24 15:44:13.416237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:18:51.890 [2024-07-24 15:44:13.416749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.416989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:51.890 [2024-07-24 15:44:13.417245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:51.891 [2024-07-24 15:44:13.417699] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:51.891 [2024-07-24 15:44:13.417713] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 610a91da-4184-4e70-9b70-5012476db97f 00:18:51.891 [2024-07-24 15:44:13.417725] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:51.891 [2024-07-24 15:44:13.417739] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:51.891 [2024-07-24 15:44:13.417750] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:51.891 [2024-07-24 15:44:13.417765] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:51.891 [2024-07-24 15:44:13.417776] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:51.891 [2024-07-24 15:44:13.417791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:18:51.891 [2024-07-24 15:44:13.417802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:51.891 [2024-07-24 15:44:13.417817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:51.891 [2024-07-24 15:44:13.417827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:51.891 [2024-07-24 15:44:13.417842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.891 [2024-07-24 15:44:13.417855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:51.891 [2024-07-24 15:44:13.417871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.890 ms 00:18:51.891 [2024-07-24 15:44:13.417886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.891 [2024-07-24 15:44:13.437398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.891 [2024-07-24 15:44:13.437474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:51.891 [2024-07-24 15:44:13.437500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.441 ms 00:18:51.891 [2024-07-24 15:44:13.437512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.891 [2024-07-24 15:44:13.437852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.891 [2024-07-24 15:44:13.437876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:51.891 [2024-07-24 15:44:13.437891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:18:51.891 [2024-07-24 15:44:13.437903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.149 [2024-07-24 15:44:13.502133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.149 [2024-07-24 15:44:13.502233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:52.149 [2024-07-24 15:44:13.502272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.149 [2024-07-24 15:44:13.502291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.149 [2024-07-24 15:44:13.502513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.149 [2024-07-24 15:44:13.502544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:52.149 [2024-07-24 15:44:13.502565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.149 [2024-07-24 15:44:13.502588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.149 [2024-07-24 15:44:13.502704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.149 [2024-07-24 15:44:13.502725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:52.150 [2024-07-24 15:44:13.502742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.150 [2024-07-24 15:44:13.502763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.150 [2024-07-24 15:44:13.502824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.150 [2024-07-24 15:44:13.502842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:52.150 [2024-07-24 15:44:13.502857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.150 [2024-07-24 15:44:13.502871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.150 [2024-07-24 
15:44:13.627696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.150 [2024-07-24 15:44:13.627785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:52.150 [2024-07-24 15:44:13.627814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.150 [2024-07-24 15:44:13.627827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.150 [2024-07-24 15:44:13.669050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.150 [2024-07-24 15:44:13.669171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:52.150 [2024-07-24 15:44:13.669218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.150 [2024-07-24 15:44:13.669239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.150 [2024-07-24 15:44:13.669431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.150 [2024-07-24 15:44:13.669464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:52.150 [2024-07-24 15:44:13.669493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.150 [2024-07-24 15:44:13.669516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.150 [2024-07-24 15:44:13.669612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.150 [2024-07-24 15:44:13.669638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:52.150 [2024-07-24 15:44:13.669655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.150 [2024-07-24 15:44:13.669668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.150 [2024-07-24 15:44:13.669832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.150 [2024-07-24 15:44:13.669852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:52.150 [2024-07-24 15:44:13.669871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.150 [2024-07-24 15:44:13.669883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.150 [2024-07-24 15:44:13.669980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.150 [2024-07-24 15:44:13.669998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:52.150 [2024-07-24 15:44:13.670014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.150 [2024-07-24 15:44:13.670026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.150 [2024-07-24 15:44:13.670119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.150 [2024-07-24 15:44:13.670138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:52.150 [2024-07-24 15:44:13.670154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.150 [2024-07-24 15:44:13.670166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.150 [2024-07-24 15:44:13.670245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.150 [2024-07-24 15:44:13.670261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:52.150 [2024-07-24 15:44:13.670276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.150 [2024-07-24 15:44:13.670287] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.150 [2024-07-24 15:44:13.670512] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 465.289 ms, result 0 00:18:52.150 true 00:18:52.150 15:44:13 -- ftl/trim.sh@63 -- # killprocess 72841 00:18:52.150 15:44:13 -- common/autotest_common.sh@926 -- # '[' -z 72841 ']' 00:18:52.150 15:44:13 -- common/autotest_common.sh@930 -- # kill -0 72841 00:18:52.150 15:44:13 -- common/autotest_common.sh@931 -- # uname 00:18:52.150 15:44:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:52.150 15:44:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72841 00:18:52.150 killing process with pid 72841 00:18:52.150 15:44:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:52.150 15:44:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:52.150 15:44:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72841' 00:18:52.150 15:44:13 -- common/autotest_common.sh@945 -- # kill 72841 00:18:52.150 15:44:13 -- common/autotest_common.sh@950 -- # wait 72841 00:18:57.409 15:44:18 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:58.342 65536+0 records in 00:18:58.342 65536+0 records out 00:18:58.342 268435456 bytes (268 MB, 256 MiB) copied, 1.25807 s, 213 MB/s 00:18:58.342 15:44:19 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:58.342 [2024-07-24 15:44:19.730379] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:58.342 [2024-07-24 15:44:19.730583] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73061 ] 00:18:58.343 [2024-07-24 15:44:19.918987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.601 [2024-07-24 15:44:20.144767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.169 [2024-07-24 15:44:20.492975] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:59.169 [2024-07-24 15:44:20.493082] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:59.169 [2024-07-24 15:44:20.649541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.169 [2024-07-24 15:44:20.649622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:59.169 [2024-07-24 15:44:20.649644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:59.169 [2024-07-24 15:44:20.649662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.169 [2024-07-24 15:44:20.653181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.169 [2024-07-24 15:44:20.653235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:59.169 [2024-07-24 15:44:20.653253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.487 ms 00:18:59.169 [2024-07-24 15:44:20.653270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.169 [2024-07-24 15:44:20.653417] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:59.169 [2024-07-24 15:44:20.654401] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:59.169 [2024-07-24 15:44:20.654439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.169 [2024-07-24 15:44:20.654459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:59.169 [2024-07-24 15:44:20.654471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.034 ms 00:18:59.169 [2024-07-24 15:44:20.654483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.169 [2024-07-24 15:44:20.655853] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:59.169 [2024-07-24 15:44:20.672908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.169 [2024-07-24 15:44:20.673021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:59.169 [2024-07-24 15:44:20.673044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.047 ms 00:18:59.169 [2024-07-24 15:44:20.673056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.169 [2024-07-24 15:44:20.673342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.169 [2024-07-24 15:44:20.673367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:59.169 [2024-07-24 15:44:20.673386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:59.169 [2024-07-24 15:44:20.673398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.169 [2024-07-24 15:44:20.678701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.169 [2024-07-24 15:44:20.678776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:59.169 [2024-07-24 15:44:20.678795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.228 ms 00:18:59.169 [2024-07-24 15:44:20.678807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.169 [2024-07-24 15:44:20.679020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.169 [2024-07-24 15:44:20.679049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:59.169 [2024-07-24 15:44:20.679063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:18:59.169 [2024-07-24 15:44:20.679074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.169 [2024-07-24 15:44:20.679153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.169 [2024-07-24 15:44:20.679173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:59.169 [2024-07-24 15:44:20.679186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:59.169 [2024-07-24 15:44:20.679197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.169 [2024-07-24 15:44:20.679237] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:59.169 [2024-07-24 15:44:20.683575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.169 [2024-07-24 15:44:20.683630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:59.169 [2024-07-24 15:44:20.683647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.354 ms 00:18:59.169 [2024-07-24 15:44:20.683658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.169 
[2024-07-24 15:44:20.683778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.169 [2024-07-24 15:44:20.683804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:59.169 [2024-07-24 15:44:20.683818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:59.169 [2024-07-24 15:44:20.683829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.169 [2024-07-24 15:44:20.683863] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:59.169 [2024-07-24 15:44:20.683892] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:59.169 [2024-07-24 15:44:20.683936] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:59.169 [2024-07-24 15:44:20.683956] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:59.169 [2024-07-24 15:44:20.684043] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:59.169 [2024-07-24 15:44:20.684059] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:59.169 [2024-07-24 15:44:20.684074] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:59.169 [2024-07-24 15:44:20.684112] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:59.169 [2024-07-24 15:44:20.684129] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:59.169 [2024-07-24 15:44:20.684141] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:59.169 [2024-07-24 15:44:20.684153] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:59.169 [2024-07-24 15:44:20.684173] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:59.169 [2024-07-24 15:44:20.684185] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:59.169 [2024-07-24 15:44:20.684196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.169 [2024-07-24 15:44:20.684212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:59.169 [2024-07-24 15:44:20.684224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:18:59.169 [2024-07-24 15:44:20.684235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.169 [2024-07-24 15:44:20.684320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.169 [2024-07-24 15:44:20.684336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:59.169 [2024-07-24 15:44:20.684348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:59.169 [2024-07-24 15:44:20.684358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.169 [2024-07-24 15:44:20.684448] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:59.169 [2024-07-24 15:44:20.684464] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:59.169 [2024-07-24 15:44:20.684476] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:59.169 [2024-07-24 15:44:20.684492] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.169 [2024-07-24 15:44:20.684504] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:59.169 [2024-07-24 15:44:20.684515] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:59.169 [2024-07-24 15:44:20.684526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:59.169 [2024-07-24 15:44:20.684536] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:59.169 [2024-07-24 15:44:20.684546] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:59.169 [2024-07-24 15:44:20.684556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:59.169 [2024-07-24 15:44:20.684567] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:59.169 [2024-07-24 15:44:20.684577] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:59.169 [2024-07-24 15:44:20.684587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:59.169 [2024-07-24 15:44:20.684597] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:59.169 [2024-07-24 15:44:20.684607] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:59.169 [2024-07-24 15:44:20.684617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.169 [2024-07-24 15:44:20.684627] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:59.169 [2024-07-24 15:44:20.684639] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:59.169 [2024-07-24 15:44:20.684649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.169 [2024-07-24 15:44:20.684675] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:59.169 [2024-07-24 15:44:20.684697] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:59.169 [2024-07-24 15:44:20.684707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:59.169 [2024-07-24 15:44:20.684718] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:59.169 [2024-07-24 15:44:20.684728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:59.169 [2024-07-24 15:44:20.684738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:59.169 [2024-07-24 15:44:20.684748] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:59.169 [2024-07-24 15:44:20.684758] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:59.169 [2024-07-24 15:44:20.684768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:59.169 [2024-07-24 15:44:20.684778] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:59.169 [2024-07-24 15:44:20.684788] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:59.169 [2024-07-24 15:44:20.684797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:59.169 [2024-07-24 15:44:20.684808] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:59.169 [2024-07-24 15:44:20.684818] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:59.169 [2024-07-24 15:44:20.684827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:59.169 [2024-07-24 15:44:20.684837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:59.169 [2024-07-24 15:44:20.684847] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:59.169 [2024-07-24 15:44:20.684857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:59.169 [2024-07-24 15:44:20.684867] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:59.169 [2024-07-24 15:44:20.684877] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:59.169 [2024-07-24 15:44:20.684887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:59.169 [2024-07-24 15:44:20.684896] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:59.169 [2024-07-24 15:44:20.684908] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:59.169 [2024-07-24 15:44:20.684918] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:59.169 [2024-07-24 15:44:20.684929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.169 [2024-07-24 15:44:20.684940] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:59.169 [2024-07-24 15:44:20.684950] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:59.169 [2024-07-24 15:44:20.684960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:59.169 [2024-07-24 15:44:20.684971] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:59.169 [2024-07-24 15:44:20.684980] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:59.169 [2024-07-24 15:44:20.684992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:59.169 [2024-07-24 15:44:20.685003] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:59.169 [2024-07-24 15:44:20.685022] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:59.169 [2024-07-24 15:44:20.685036] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:59.169 [2024-07-24 15:44:20.685047] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:59.169 [2024-07-24 15:44:20.685058] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:59.169 [2024-07-24 15:44:20.685070] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:59.169 [2024-07-24 15:44:20.685081] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:59.169 [2024-07-24 15:44:20.685109] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:59.169 [2024-07-24 15:44:20.685122] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:59.169 [2024-07-24 15:44:20.685133] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:59.169 [2024-07-24 15:44:20.685145] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:59.169 
[2024-07-24 15:44:20.685156] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:59.169 [2024-07-24 15:44:20.685167] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:59.169 [2024-07-24 15:44:20.685179] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:59.169 [2024-07-24 15:44:20.685190] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:59.169 [2024-07-24 15:44:20.685201] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:59.169 [2024-07-24 15:44:20.685214] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:59.169 [2024-07-24 15:44:20.685227] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:59.169 [2024-07-24 15:44:20.685239] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:59.169 [2024-07-24 15:44:20.685250] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:59.169 [2024-07-24 15:44:20.685262] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:59.169 [2024-07-24 15:44:20.685274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.169 [2024-07-24 15:44:20.685293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:59.169 [2024-07-24 15:44:20.685305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:18:59.169 [2024-07-24 15:44:20.685316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.169 [2024-07-24 15:44:20.703707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.169 [2024-07-24 15:44:20.703784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:59.169 [2024-07-24 15:44:20.703807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.323 ms 00:18:59.169 [2024-07-24 15:44:20.703819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.169 [2024-07-24 15:44:20.704015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.169 [2024-07-24 15:44:20.704035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:59.169 [2024-07-24 15:44:20.704048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:59.169 [2024-07-24 15:44:20.704059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.170 [2024-07-24 15:44:20.753649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.170 [2024-07-24 15:44:20.753736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:59.170 [2024-07-24 15:44:20.753758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.533 ms 00:18:59.170 [2024-07-24 15:44:20.753770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.170 
[2024-07-24 15:44:20.753930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.170 [2024-07-24 15:44:20.753951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:59.170 [2024-07-24 15:44:20.753964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:59.170 [2024-07-24 15:44:20.753976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.170 [2024-07-24 15:44:20.754395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.170 [2024-07-24 15:44:20.754417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:59.170 [2024-07-24 15:44:20.754430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:18:59.170 [2024-07-24 15:44:20.754441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.170 [2024-07-24 15:44:20.754608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.170 [2024-07-24 15:44:20.754635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:59.170 [2024-07-24 15:44:20.754649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:18:59.170 [2024-07-24 15:44:20.754660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.428 [2024-07-24 15:44:20.773941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.428 [2024-07-24 15:44:20.774024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:59.428 [2024-07-24 15:44:20.774046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.247 ms 00:18:59.428 [2024-07-24 15:44:20.774058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.428 [2024-07-24 15:44:20.791613] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:59.428 [2024-07-24 15:44:20.791725] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:59.428 [2024-07-24 15:44:20.791762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.428 [2024-07-24 15:44:20.791776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:59.428 [2024-07-24 15:44:20.791793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.463 ms 00:18:59.428 [2024-07-24 15:44:20.791804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.428 [2024-07-24 15:44:20.822955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.428 [2024-07-24 15:44:20.823060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:59.428 [2024-07-24 15:44:20.823082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.954 ms 00:18:59.428 [2024-07-24 15:44:20.823127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.428 [2024-07-24 15:44:20.840310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.428 [2024-07-24 15:44:20.840407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:59.428 [2024-07-24 15:44:20.840429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.928 ms 00:18:59.428 [2024-07-24 15:44:20.840442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.428 [2024-07-24 15:44:20.857414] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:18:59.428 [2024-07-24 15:44:20.857516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:59.428 [2024-07-24 15:44:20.857582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.771 ms 00:18:59.428 [2024-07-24 15:44:20.857594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.428 [2024-07-24 15:44:20.858277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.428 [2024-07-24 15:44:20.858306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:59.428 [2024-07-24 15:44:20.858322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:18:59.428 [2024-07-24 15:44:20.858334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.428 [2024-07-24 15:44:20.947265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.428 [2024-07-24 15:44:20.947356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:59.428 [2024-07-24 15:44:20.947380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.891 ms 00:18:59.428 [2024-07-24 15:44:20.947392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.428 [2024-07-24 15:44:20.960819] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:59.428 [2024-07-24 15:44:20.976034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.428 [2024-07-24 15:44:20.976164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:59.428 [2024-07-24 15:44:20.976188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.442 ms 00:18:59.428 [2024-07-24 15:44:20.976201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.428 [2024-07-24 15:44:20.976371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.428 [2024-07-24 15:44:20.976393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:59.428 [2024-07-24 15:44:20.976406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:59.428 [2024-07-24 15:44:20.976417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.428 [2024-07-24 15:44:20.976500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.429 [2024-07-24 15:44:20.976518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:59.429 [2024-07-24 15:44:20.976531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:59.429 [2024-07-24 15:44:20.976552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.429 [2024-07-24 15:44:20.979595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.429 [2024-07-24 15:44:20.979676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:59.429 [2024-07-24 15:44:20.979696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.003 ms 00:18:59.429 [2024-07-24 15:44:20.979707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.429 [2024-07-24 15:44:20.979776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.429 [2024-07-24 15:44:20.979792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:59.429 [2024-07-24 15:44:20.979804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.007 ms 00:18:59.429 [2024-07-24 15:44:20.979816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.429 [2024-07-24 15:44:20.979887] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:59.429 [2024-07-24 15:44:20.979905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.429 [2024-07-24 15:44:20.979916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:59.429 [2024-07-24 15:44:20.979928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:59.429 [2024-07-24 15:44:20.979939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.429 [2024-07-24 15:44:21.013141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.429 [2024-07-24 15:44:21.013225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:59.429 [2024-07-24 15:44:21.013248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.168 ms 00:18:59.429 [2024-07-24 15:44:21.013288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.429 [2024-07-24 15:44:21.013507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.429 [2024-07-24 15:44:21.013530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:59.429 [2024-07-24 15:44:21.013544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:59.429 [2024-07-24 15:44:21.013556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.429 [2024-07-24 15:44:21.014887] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:59.429 [2024-07-24 15:44:21.019417] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 364.933 ms, result 0 00:18:59.429 [2024-07-24 15:44:21.020270] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:59.687 [2024-07-24 15:44:21.037859] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:09.465  Copying: 26/256 [MB] (26 MBps) Copying: 53/256 [MB] (26 MBps) Copying: 80/256 [MB] (26 MBps) Copying: 105/256 [MB] (25 MBps) Copying: 130/256 [MB] (25 MBps) Copying: 157/256 [MB] (26 MBps) Copying: 182/256 [MB] (25 MBps) Copying: 208/256 [MB] (25 MBps) Copying: 235/256 [MB] (27 MBps) Copying: 256/256 [MB] (average 26 MBps)[2024-07-24 15:44:30.773947] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:09.465 [2024-07-24 15:44:30.786169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.465 [2024-07-24 15:44:30.786212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:09.465 [2024-07-24 15:44:30.786232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:09.465 [2024-07-24 15:44:30.786244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.465 [2024-07-24 15:44:30.786293] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:09.465 [2024-07-24 15:44:30.789612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.465 [2024-07-24 15:44:30.789646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 
00:19:09.465 [2024-07-24 15:44:30.789661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.298 ms 00:19:09.465 [2024-07-24 15:44:30.789672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.465 [2024-07-24 15:44:30.791212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.465 [2024-07-24 15:44:30.791254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:09.465 [2024-07-24 15:44:30.791271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.510 ms 00:19:09.465 [2024-07-24 15:44:30.791282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.465 [2024-07-24 15:44:30.797978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.465 [2024-07-24 15:44:30.798020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:09.465 [2024-07-24 15:44:30.798051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.671 ms 00:19:09.465 [2024-07-24 15:44:30.798062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.465 [2024-07-24 15:44:30.805654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.465 [2024-07-24 15:44:30.805691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:09.465 [2024-07-24 15:44:30.805706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.495 ms 00:19:09.465 [2024-07-24 15:44:30.805717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.465 [2024-07-24 15:44:30.836879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.465 [2024-07-24 15:44:30.836947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:09.465 [2024-07-24 15:44:30.836966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.100 ms 00:19:09.465 [2024-07-24 15:44:30.836978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.465 [2024-07-24 15:44:30.854999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.465 [2024-07-24 15:44:30.855049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:09.465 [2024-07-24 15:44:30.855067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.880 ms 00:19:09.465 [2024-07-24 15:44:30.855108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.465 [2024-07-24 15:44:30.855305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.465 [2024-07-24 15:44:30.855328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:09.465 [2024-07-24 15:44:30.855341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:19:09.465 [2024-07-24 15:44:30.855352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.465 [2024-07-24 15:44:30.886680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.465 [2024-07-24 15:44:30.886726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:09.465 [2024-07-24 15:44:30.886743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.293 ms 00:19:09.465 [2024-07-24 15:44:30.886777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.465 [2024-07-24 15:44:30.918298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.465 [2024-07-24 15:44:30.918354] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:09.466 [2024-07-24 15:44:30.918374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.433 ms 00:19:09.466 [2024-07-24 15:44:30.918385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.466 [2024-07-24 15:44:30.949689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.466 [2024-07-24 15:44:30.949767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:09.466 [2024-07-24 15:44:30.949787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.204 ms 00:19:09.466 [2024-07-24 15:44:30.949799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.466 [2024-07-24 15:44:30.981221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.466 [2024-07-24 15:44:30.981267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:09.466 [2024-07-24 15:44:30.981284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.224 ms 00:19:09.466 [2024-07-24 15:44:30.981295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.466 [2024-07-24 15:44:30.981380] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:09.466 [2024-07-24 15:44:30.981421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 
0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.981995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982207] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:09.466 [2024-07-24 15:44:30.982357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 
15:44:30.982495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:09.467 [2024-07-24 15:44:30.982619] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:09.467 [2024-07-24 15:44:30.982631] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 610a91da-4184-4e70-9b70-5012476db97f 00:19:09.467 [2024-07-24 15:44:30.982663] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:09.467 [2024-07-24 15:44:30.982675] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:09.467 [2024-07-24 15:44:30.982686] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:09.467 [2024-07-24 15:44:30.982697] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:09.467 [2024-07-24 15:44:30.982707] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:09.467 [2024-07-24 15:44:30.982719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:09.467 [2024-07-24 15:44:30.982730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:09.467 [2024-07-24 15:44:30.982740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:09.467 [2024-07-24 15:44:30.982750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:09.467 [2024-07-24 15:44:30.982761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.467 [2024-07-24 15:44:30.982772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:09.467 [2024-07-24 15:44:30.982794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.384 ms 00:19:09.467 [2024-07-24 15:44:30.982805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.467 [2024-07-24 15:44:30.999284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.467 [2024-07-24 15:44:30.999328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:09.467 [2024-07-24 15:44:30.999345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.452 ms 00:19:09.467 [2024-07-24 15:44:30.999357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:09.467 [2024-07-24 15:44:30.999659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.467 [2024-07-24 15:44:30.999683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:09.467 [2024-07-24 15:44:30.999697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:19:09.467 [2024-07-24 15:44:30.999708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.467 [2024-07-24 15:44:31.048930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.467 [2024-07-24 15:44:31.049010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:09.467 [2024-07-24 15:44:31.049030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.467 [2024-07-24 15:44:31.049041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.467 [2024-07-24 15:44:31.049234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.467 [2024-07-24 15:44:31.049256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:09.467 [2024-07-24 15:44:31.049269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.467 [2024-07-24 15:44:31.049280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.467 [2024-07-24 15:44:31.049355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.467 [2024-07-24 15:44:31.049385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:09.467 [2024-07-24 15:44:31.049397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.467 [2024-07-24 15:44:31.049409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.467 [2024-07-24 15:44:31.049434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.467 [2024-07-24 15:44:31.049447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:09.467 [2024-07-24 15:44:31.049472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.467 [2024-07-24 15:44:31.049483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.725 [2024-07-24 15:44:31.150728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.725 [2024-07-24 15:44:31.150808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:09.725 [2024-07-24 15:44:31.150828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.725 [2024-07-24 15:44:31.150839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.725 [2024-07-24 15:44:31.191063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.725 [2024-07-24 15:44:31.191194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:09.725 [2024-07-24 15:44:31.191216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.725 [2024-07-24 15:44:31.191228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.725 [2024-07-24 15:44:31.191348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.725 [2024-07-24 15:44:31.191366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:09.725 [2024-07-24 15:44:31.191379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.725 
[2024-07-24 15:44:31.191390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.725 [2024-07-24 15:44:31.191428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.725 [2024-07-24 15:44:31.191443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:09.725 [2024-07-24 15:44:31.191454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.725 [2024-07-24 15:44:31.191480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.725 [2024-07-24 15:44:31.191616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.725 [2024-07-24 15:44:31.191642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:09.725 [2024-07-24 15:44:31.191655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.725 [2024-07-24 15:44:31.191666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.725 [2024-07-24 15:44:31.191723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.725 [2024-07-24 15:44:31.191741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:09.725 [2024-07-24 15:44:31.191753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.725 [2024-07-24 15:44:31.191778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.725 [2024-07-24 15:44:31.191829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.725 [2024-07-24 15:44:31.191844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:09.725 [2024-07-24 15:44:31.191856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.726 [2024-07-24 15:44:31.191867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.726 [2024-07-24 15:44:31.191925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.726 [2024-07-24 15:44:31.191941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:09.726 [2024-07-24 15:44:31.191953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.726 [2024-07-24 15:44:31.191977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.726 [2024-07-24 15:44:31.192198] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 406.038 ms, result 0 00:19:11.097 00:19:11.097 00:19:11.097 15:44:32 -- ftl/trim.sh@72 -- # svcpid=73191 00:19:11.097 15:44:32 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:11.097 15:44:32 -- ftl/trim.sh@73 -- # waitforlisten 73191 00:19:11.097 15:44:32 -- common/autotest_common.sh@819 -- # '[' -z 73191 ']' 00:19:11.097 15:44:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.097 15:44:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:11.097 15:44:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.097 15:44:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:11.097 15:44:32 -- common/autotest_common.sh@10 -- # set +x 00:19:11.097 [2024-07-24 15:44:32.579713] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:19:11.097 [2024-07-24 15:44:32.579872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73191 ] 00:19:11.355 [2024-07-24 15:44:32.747281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.355 [2024-07-24 15:44:32.928895] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:11.355 [2024-07-24 15:44:32.929148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.726 15:44:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:12.726 15:44:34 -- common/autotest_common.sh@852 -- # return 0 00:19:12.726 15:44:34 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:12.983 [2024-07-24 15:44:34.430852] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:12.983 [2024-07-24 15:44:34.430935] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:13.242 [2024-07-24 15:44:34.610779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.242 [2024-07-24 15:44:34.610853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:13.242 [2024-07-24 15:44:34.610879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:13.242 [2024-07-24 15:44:34.610892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.242 [2024-07-24 15:44:34.615164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.242 [2024-07-24 15:44:34.615222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:13.242 [2024-07-24 15:44:34.615247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.240 ms 00:19:13.242 [2024-07-24 15:44:34.615261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.243 [2024-07-24 15:44:34.615436] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:13.243 [2024-07-24 15:44:34.616435] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:13.243 [2024-07-24 15:44:34.616483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.243 [2024-07-24 15:44:34.616500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:13.243 [2024-07-24 15:44:34.616519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:19:13.243 [2024-07-24 15:44:34.616532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.243 [2024-07-24 15:44:34.617830] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:13.243 [2024-07-24 15:44:34.634683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.243 [2024-07-24 15:44:34.634772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:13.243 [2024-07-24 15:44:34.634794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.865 ms 00:19:13.243 [2024-07-24 15:44:34.634812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.243 [2024-07-24 15:44:34.634989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.243 [2024-07-24 15:44:34.635019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:19:13.243 [2024-07-24 15:44:34.635035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:13.243 [2024-07-24 15:44:34.635052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.243 [2024-07-24 15:44:34.639848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.243 [2024-07-24 15:44:34.639925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:13.243 [2024-07-24 15:44:34.639947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.680 ms 00:19:13.243 [2024-07-24 15:44:34.639970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.243 [2024-07-24 15:44:34.640175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.243 [2024-07-24 15:44:34.640206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:13.243 [2024-07-24 15:44:34.640222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:19:13.243 [2024-07-24 15:44:34.640239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.243 [2024-07-24 15:44:34.640281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.243 [2024-07-24 15:44:34.640312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:13.243 [2024-07-24 15:44:34.640326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:13.243 [2024-07-24 15:44:34.640343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.243 [2024-07-24 15:44:34.640387] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:13.243 [2024-07-24 15:44:34.644748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.243 [2024-07-24 15:44:34.644806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:13.243 [2024-07-24 15:44:34.644830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.369 ms 00:19:13.243 [2024-07-24 15:44:34.644844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.243 [2024-07-24 15:44:34.644974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.243 [2024-07-24 15:44:34.644995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:13.243 [2024-07-24 15:44:34.645014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:13.243 [2024-07-24 15:44:34.645027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.243 [2024-07-24 15:44:34.645067] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:13.243 [2024-07-24 15:44:34.645126] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:13.243 [2024-07-24 15:44:34.645182] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:13.243 [2024-07-24 15:44:34.645206] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:13.243 [2024-07-24 15:44:34.645306] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:13.243 [2024-07-24 15:44:34.645324] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:19:13.243 [2024-07-24 15:44:34.645344] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:13.243 [2024-07-24 15:44:34.645361] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:13.243 [2024-07-24 15:44:34.645387] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:13.243 [2024-07-24 15:44:34.645401] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:13.243 [2024-07-24 15:44:34.645418] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:13.243 [2024-07-24 15:44:34.645430] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:13.243 [2024-07-24 15:44:34.645450] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:13.243 [2024-07-24 15:44:34.645464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.243 [2024-07-24 15:44:34.645480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:13.243 [2024-07-24 15:44:34.645494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:19:13.243 [2024-07-24 15:44:34.645511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.243 [2024-07-24 15:44:34.645596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.243 [2024-07-24 15:44:34.645617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:13.243 [2024-07-24 15:44:34.645635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:13.243 [2024-07-24 15:44:34.645652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.243 [2024-07-24 15:44:34.645745] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:13.243 [2024-07-24 15:44:34.645769] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:13.243 [2024-07-24 15:44:34.645783] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:13.243 [2024-07-24 15:44:34.645800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.243 [2024-07-24 15:44:34.645813] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:13.243 [2024-07-24 15:44:34.645831] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:13.243 [2024-07-24 15:44:34.645843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:13.243 [2024-07-24 15:44:34.645864] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:13.243 [2024-07-24 15:44:34.645886] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:13.243 [2024-07-24 15:44:34.645902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:13.243 [2024-07-24 15:44:34.645914] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:13.243 [2024-07-24 15:44:34.645931] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:13.243 [2024-07-24 15:44:34.645942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:13.243 [2024-07-24 15:44:34.645958] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:13.243 [2024-07-24 15:44:34.645970] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:13.243 [2024-07-24 15:44:34.645985] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.243 [2024-07-24 15:44:34.645997] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:13.243 [2024-07-24 15:44:34.646012] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:13.243 [2024-07-24 15:44:34.646024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.243 [2024-07-24 15:44:34.646040] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:13.243 [2024-07-24 15:44:34.646052] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:13.243 [2024-07-24 15:44:34.646068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:13.243 [2024-07-24 15:44:34.646080] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:13.243 [2024-07-24 15:44:34.646115] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:13.243 [2024-07-24 15:44:34.646128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:13.243 [2024-07-24 15:44:34.646144] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:13.243 [2024-07-24 15:44:34.646155] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:13.243 [2024-07-24 15:44:34.646171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:13.243 [2024-07-24 15:44:34.646182] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:13.243 [2024-07-24 15:44:34.646198] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:13.243 [2024-07-24 15:44:34.646228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:13.243 [2024-07-24 15:44:34.646246] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:13.243 [2024-07-24 15:44:34.646258] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:13.243 [2024-07-24 15:44:34.646273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:13.243 [2024-07-24 15:44:34.646285] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:13.243 [2024-07-24 15:44:34.646300] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:13.243 [2024-07-24 15:44:34.646312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:13.243 [2024-07-24 15:44:34.646328] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:13.243 [2024-07-24 15:44:34.646339] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:13.243 [2024-07-24 15:44:34.646360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:13.243 [2024-07-24 15:44:34.646371] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:13.243 [2024-07-24 15:44:34.646387] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:13.243 [2024-07-24 15:44:34.646400] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:13.243 [2024-07-24 15:44:34.646423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.243 [2024-07-24 15:44:34.646436] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:13.244 [2024-07-24 15:44:34.646452] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:13.244 [2024-07-24 15:44:34.646464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:19:13.244 [2024-07-24 15:44:34.646480] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:13.244 [2024-07-24 15:44:34.646491] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:13.244 [2024-07-24 15:44:34.646507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:13.244 [2024-07-24 15:44:34.646520] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:13.244 [2024-07-24 15:44:34.646540] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:13.244 [2024-07-24 15:44:34.646555] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:13.244 [2024-07-24 15:44:34.646572] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:13.244 [2024-07-24 15:44:34.646584] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:13.244 [2024-07-24 15:44:34.646606] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:13.244 [2024-07-24 15:44:34.646619] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:13.244 [2024-07-24 15:44:34.646636] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:13.244 [2024-07-24 15:44:34.646648] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:13.244 [2024-07-24 15:44:34.646664] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:13.244 [2024-07-24 15:44:34.646677] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:13.244 [2024-07-24 15:44:34.646693] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:13.244 [2024-07-24 15:44:34.646705] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:13.244 [2024-07-24 15:44:34.646722] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:13.244 [2024-07-24 15:44:34.646735] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:13.244 [2024-07-24 15:44:34.646752] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:13.244 [2024-07-24 15:44:34.646766] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:13.244 [2024-07-24 15:44:34.646785] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:13.244 [2024-07-24 15:44:34.646798] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:13.244 [2024-07-24 15:44:34.646814] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:13.244 [2024-07-24 15:44:34.646828] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:13.244 [2024-07-24 15:44:34.646851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.244 [2024-07-24 15:44:34.646864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:13.244 [2024-07-24 15:44:34.646881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:19:13.244 [2024-07-24 15:44:34.646893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.244 [2024-07-24 15:44:34.666416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.244 [2024-07-24 15:44:34.666476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:13.244 [2024-07-24 15:44:34.666500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.435 ms 00:19:13.244 [2024-07-24 15:44:34.666513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.244 [2024-07-24 15:44:34.666699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.244 [2024-07-24 15:44:34.666722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:13.244 [2024-07-24 15:44:34.666738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:13.244 [2024-07-24 15:44:34.666750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.244 [2024-07-24 15:44:34.708214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.244 [2024-07-24 15:44:34.708271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:13.244 [2024-07-24 15:44:34.708298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.427 ms 00:19:13.244 [2024-07-24 15:44:34.708312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.244 [2024-07-24 15:44:34.708448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.244 [2024-07-24 15:44:34.708468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:13.244 [2024-07-24 15:44:34.708488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:13.244 [2024-07-24 15:44:34.708506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.244 [2024-07-24 15:44:34.708857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.244 [2024-07-24 15:44:34.708884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:13.244 [2024-07-24 15:44:34.708907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:19:13.244 [2024-07-24 15:44:34.708920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.244 [2024-07-24 15:44:34.709107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.244 [2024-07-24 15:44:34.709134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:13.244 [2024-07-24 15:44:34.709155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:19:13.244 [2024-07-24 15:44:34.709168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:13.244 [2024-07-24 15:44:34.728692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.244 [2024-07-24 15:44:34.728755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:13.244 [2024-07-24 15:44:34.728783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.468 ms 00:19:13.244 [2024-07-24 15:44:34.728797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.244 [2024-07-24 15:44:34.746189] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:13.244 [2024-07-24 15:44:34.746283] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:13.244 [2024-07-24 15:44:34.746312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.244 [2024-07-24 15:44:34.746328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:13.244 [2024-07-24 15:44:34.746351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.311 ms 00:19:13.244 [2024-07-24 15:44:34.746363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.244 [2024-07-24 15:44:34.777661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.244 [2024-07-24 15:44:34.777781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:13.244 [2024-07-24 15:44:34.777822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.066 ms 00:19:13.244 [2024-07-24 15:44:34.777836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.244 [2024-07-24 15:44:34.794643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.244 [2024-07-24 15:44:34.794732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:13.244 [2024-07-24 15:44:34.794760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.600 ms 00:19:13.244 [2024-07-24 15:44:34.794774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.244 [2024-07-24 15:44:34.811046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.244 [2024-07-24 15:44:34.811139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:13.244 [2024-07-24 15:44:34.811173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.079 ms 00:19:13.244 [2024-07-24 15:44:34.811187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.244 [2024-07-24 15:44:34.811814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.244 [2024-07-24 15:44:34.811853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:13.244 [2024-07-24 15:44:34.811877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:19:13.244 [2024-07-24 15:44:34.811890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.502 [2024-07-24 15:44:34.897733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.502 [2024-07-24 15:44:34.897814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:13.502 [2024-07-24 15:44:34.897844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.775 ms 00:19:13.502 [2024-07-24 15:44:34.897864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.502 [2024-07-24 
15:44:34.911512] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:13.502 [2024-07-24 15:44:34.925915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.502 [2024-07-24 15:44:34.926003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:13.503 [2024-07-24 15:44:34.926027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.879 ms 00:19:13.503 [2024-07-24 15:44:34.926046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.503 [2024-07-24 15:44:34.926216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.503 [2024-07-24 15:44:34.926250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:13.503 [2024-07-24 15:44:34.926266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:13.503 [2024-07-24 15:44:34.926283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.503 [2024-07-24 15:44:34.926353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.503 [2024-07-24 15:44:34.926377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:13.503 [2024-07-24 15:44:34.926391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:13.503 [2024-07-24 15:44:34.926414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.503 [2024-07-24 15:44:34.929587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.503 [2024-07-24 15:44:34.929637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:13.503 [2024-07-24 15:44:34.929654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.127 ms 00:19:13.503 [2024-07-24 15:44:34.929672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.503 [2024-07-24 15:44:34.929712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.503 [2024-07-24 15:44:34.929741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:13.503 [2024-07-24 15:44:34.929755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:13.503 [2024-07-24 15:44:34.929780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.503 [2024-07-24 15:44:34.929834] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:13.503 [2024-07-24 15:44:34.929864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.503 [2024-07-24 15:44:34.929878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:13.503 [2024-07-24 15:44:34.929895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:13.503 [2024-07-24 15:44:34.929907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.503 [2024-07-24 15:44:34.961775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.503 [2024-07-24 15:44:34.961865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:13.503 [2024-07-24 15:44:34.961893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.807 ms 00:19:13.503 [2024-07-24 15:44:34.961907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.503 [2024-07-24 15:44:34.962153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.503 [2024-07-24 15:44:34.962176] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:13.503 [2024-07-24 15:44:34.962197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:19:13.503 [2024-07-24 15:44:34.962211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.503 [2024-07-24 15:44:34.963565] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:13.503 [2024-07-24 15:44:34.968123] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 352.303 ms, result 0 00:19:13.503 [2024-07-24 15:44:34.969163] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:13.503 Some configs were skipped because the RPC state that can call them passed over. 00:19:13.503 15:44:35 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:13.760 [2024-07-24 15:44:35.273937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.760 [2024-07-24 15:44:35.274228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:19:13.760 [2024-07-24 15:44:35.274378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.972 ms 00:19:13.760 [2024-07-24 15:44:35.274539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.760 [2024-07-24 15:44:35.274741] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 32.766 ms, result 0 00:19:13.760 true 00:19:13.760 15:44:35 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:14.018 [2024-07-24 15:44:35.546385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.018 [2024-07-24 15:44:35.546619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:19:14.018 [2024-07-24 15:44:35.546769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.886 ms 00:19:14.018 [2024-07-24 15:44:35.546916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.018 [2024-07-24 15:44:35.547053] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 32.545 ms, result 0 00:19:14.018 true 00:19:14.018 15:44:35 -- ftl/trim.sh@81 -- # killprocess 73191 00:19:14.018 15:44:35 -- common/autotest_common.sh@926 -- # '[' -z 73191 ']' 00:19:14.018 15:44:35 -- common/autotest_common.sh@930 -- # kill -0 73191 00:19:14.018 15:44:35 -- common/autotest_common.sh@931 -- # uname 00:19:14.018 15:44:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:14.018 15:44:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73191 00:19:14.018 killing process with pid 73191 00:19:14.018 15:44:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:14.018 15:44:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:14.018 15:44:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73191' 00:19:14.018 15:44:35 -- common/autotest_common.sh@945 -- # kill 73191 00:19:14.018 15:44:35 -- common/autotest_common.sh@950 -- # wait 73191 00:19:14.951 [2024-07-24 15:44:36.530642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.951 [2024-07-24 15:44:36.530723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit 
core IO channel 00:19:14.951 [2024-07-24 15:44:36.530745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:14.951 [2024-07-24 15:44:36.530761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.951 [2024-07-24 15:44:36.530794] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:14.951 [2024-07-24 15:44:36.534131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.951 [2024-07-24 15:44:36.534173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:14.951 [2024-07-24 15:44:36.534194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.308 ms 00:19:14.951 [2024-07-24 15:44:36.534206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.951 [2024-07-24 15:44:36.534574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.951 [2024-07-24 15:44:36.534606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:14.951 [2024-07-24 15:44:36.534624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:19:14.951 [2024-07-24 15:44:36.534636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.951 [2024-07-24 15:44:36.538739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.951 [2024-07-24 15:44:36.538786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:14.951 [2024-07-24 15:44:36.538807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.071 ms 00:19:14.951 [2024-07-24 15:44:36.538822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.951 [2024-07-24 15:44:36.546481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.951 [2024-07-24 15:44:36.546516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:14.951 [2024-07-24 15:44:36.546534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.604 ms 00:19:14.951 [2024-07-24 15:44:36.546547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.209 [2024-07-24 15:44:36.559127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.209 [2024-07-24 15:44:36.559189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:15.209 [2024-07-24 15:44:36.559228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.477 ms 00:19:15.209 [2024-07-24 15:44:36.559241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.209 [2024-07-24 15:44:36.567627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.209 [2024-07-24 15:44:36.567704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:15.209 [2024-07-24 15:44:36.567731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.294 ms 00:19:15.209 [2024-07-24 15:44:36.567744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.209 [2024-07-24 15:44:36.567922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.209 [2024-07-24 15:44:36.567943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:15.209 [2024-07-24 15:44:36.567972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:19:15.209 [2024-07-24 15:44:36.567987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
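For reference, the two trim steps traced above (ftl/trim.sh@78 and @79) drive FTL unmap through SPDK's JSON-RPC client; a minimal sketch, with the rpc.py path, bdev name, and LBA ranges taken from this run and the default RPC socket assumed:

  # rpc.py talks to the running test app over /var/tmp/spdk.sock by default (assumed here)
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # trim 1024 blocks at the start of the ftl0 bdev
  $RPC bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  # trim 1024 blocks at the top of the address space (this device reports 23592960 L2P entries)
  $RPC bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

Each call surfaces in the log as a 'Process unmap' management action followed by "Management process finished, name 'FTL unmap', ... result 0", as seen above.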
00:19:15.209 [2024-07-24 15:44:36.580653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.209 [2024-07-24 15:44:36.580696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:15.209 [2024-07-24 15:44:36.580720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.627 ms 00:19:15.209 [2024-07-24 15:44:36.580732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.209 [2024-07-24 15:44:36.593081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.209 [2024-07-24 15:44:36.593128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:15.209 [2024-07-24 15:44:36.593160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.292 ms 00:19:15.209 [2024-07-24 15:44:36.593173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.209 [2024-07-24 15:44:36.605162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.209 [2024-07-24 15:44:36.605199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:15.209 [2024-07-24 15:44:36.605222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.934 ms 00:19:15.209 [2024-07-24 15:44:36.605235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.209 [2024-07-24 15:44:36.617230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.209 [2024-07-24 15:44:36.617268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:15.209 [2024-07-24 15:44:36.617290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.906 ms 00:19:15.209 [2024-07-24 15:44:36.617304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.209 [2024-07-24 15:44:36.617354] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:15.209 [2024-07-24 15:44:36.617378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:15.209 [2024-07-24 15:44:36.617398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:15.209 [2024-07-24 15:44:36.617412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:15.209 [2024-07-24 15:44:36.617430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:15.209 [2024-07-24 15:44:36.617443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:15.209 [2024-07-24 15:44:36.617465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:15.209 [2024-07-24 15:44:36.617478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:15.209 [2024-07-24 15:44:36.617495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:15.209 [2024-07-24 15:44:36.617510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617558] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 
15:44:36.617943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.617998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:19:15.210 [2024-07-24 15:44:36.618356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:15.210 [2024-07-24 15:44:36.618821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:15.211 [2024-07-24 15:44:36.618834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:15.211 [2024-07-24 15:44:36.618852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:15.211 [2024-07-24 15:44:36.618865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:15.211 [2024-07-24 15:44:36.618882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:15.211 [2024-07-24 15:44:36.618895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:15.211 [2024-07-24 15:44:36.618912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:15.211 [2024-07-24 15:44:36.618933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:15.211 [2024-07-24 15:44:36.618953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:15.211 [2024-07-24 15:44:36.618975] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:15.211 [2024-07-24 15:44:36.619016] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 610a91da-4184-4e70-9b70-5012476db97f 00:19:15.211 [2024-07-24 15:44:36.619035] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:15.211 [2024-07-24 15:44:36.619051] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:15.211 [2024-07-24 15:44:36.619063] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:15.211 [2024-07-24 15:44:36.619080] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:15.211 [2024-07-24 15:44:36.619104] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:15.211 [2024-07-24 15:44:36.619123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:15.211 [2024-07-24 15:44:36.619136] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:15.211 [2024-07-24 15:44:36.619151] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:15.211 [2024-07-24 15:44:36.619162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:15.211 [2024-07-24 15:44:36.619178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.211 [2024-07-24 15:44:36.619192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:15.211 [2024-07-24 15:44:36.619210] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.829 ms 00:19:15.211 [2024-07-24 15:44:36.619222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.211 [2024-07-24 15:44:36.635722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.211 [2024-07-24 15:44:36.635762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:15.211 [2024-07-24 15:44:36.635791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.431 ms 00:19:15.211 [2024-07-24 15:44:36.635804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.211 [2024-07-24 15:44:36.636082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.211 [2024-07-24 15:44:36.636129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:15.211 [2024-07-24 15:44:36.636151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:19:15.211 [2024-07-24 15:44:36.636164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.211 [2024-07-24 15:44:36.694537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.211 [2024-07-24 15:44:36.694594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:15.211 [2024-07-24 15:44:36.694618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.211 [2024-07-24 15:44:36.694631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.211 [2024-07-24 15:44:36.694753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.211 [2024-07-24 15:44:36.694772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:15.211 [2024-07-24 15:44:36.694791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.211 [2024-07-24 15:44:36.694804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.211 [2024-07-24 15:44:36.694888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.211 [2024-07-24 15:44:36.694907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:15.211 [2024-07-24 15:44:36.694940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.211 [2024-07-24 15:44:36.694955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.211 [2024-07-24 15:44:36.694989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.211 [2024-07-24 15:44:36.695004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:15.211 [2024-07-24 15:44:36.695021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.211 [2024-07-24 15:44:36.695034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.469 [2024-07-24 15:44:36.839152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.469 [2024-07-24 15:44:36.839229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:15.469 [2024-07-24 15:44:36.839261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.469 [2024-07-24 15:44:36.839276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.469 [2024-07-24 15:44:36.878002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.469 [2024-07-24 15:44:36.878054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:19:15.469 [2024-07-24 15:44:36.878080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.469 [2024-07-24 15:44:36.878116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.469 [2024-07-24 15:44:36.878226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.469 [2024-07-24 15:44:36.878246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:15.469 [2024-07-24 15:44:36.878271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.469 [2024-07-24 15:44:36.878284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.469 [2024-07-24 15:44:36.878329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.469 [2024-07-24 15:44:36.878344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:15.469 [2024-07-24 15:44:36.878361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.469 [2024-07-24 15:44:36.878374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.469 [2024-07-24 15:44:36.878511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.469 [2024-07-24 15:44:36.878535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:15.469 [2024-07-24 15:44:36.878554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.469 [2024-07-24 15:44:36.878567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.469 [2024-07-24 15:44:36.878630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.469 [2024-07-24 15:44:36.878648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:15.469 [2024-07-24 15:44:36.878666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.469 [2024-07-24 15:44:36.878678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.469 [2024-07-24 15:44:36.878734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.469 [2024-07-24 15:44:36.878762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:15.469 [2024-07-24 15:44:36.878785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.469 [2024-07-24 15:44:36.878798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.469 [2024-07-24 15:44:36.878864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.469 [2024-07-24 15:44:36.878882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:15.470 [2024-07-24 15:44:36.878901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.470 [2024-07-24 15:44:36.878914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.470 [2024-07-24 15:44:36.879134] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 348.432 ms, result 0 00:19:16.401 15:44:37 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:16.401 15:44:37 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:16.658 [2024-07-24 15:44:38.056354] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 23.11.0 initialization... 00:19:16.658 [2024-07-24 15:44:38.056487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73267 ] 00:19:16.658 [2024-07-24 15:44:38.214937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.915 [2024-07-24 15:44:38.404747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.172 [2024-07-24 15:44:38.714277] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:17.173 [2024-07-24 15:44:38.714362] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:17.431 [2024-07-24 15:44:38.868687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.431 [2024-07-24 15:44:38.868768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:17.431 [2024-07-24 15:44:38.868791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:17.431 [2024-07-24 15:44:38.868810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.431 [2024-07-24 15:44:38.872175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.431 [2024-07-24 15:44:38.872231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.431 [2024-07-24 15:44:38.872251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.332 ms 00:19:17.431 [2024-07-24 15:44:38.872270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.431 [2024-07-24 15:44:38.872421] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:17.431 [2024-07-24 15:44:38.873411] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:17.431 [2024-07-24 15:44:38.873457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.431 [2024-07-24 15:44:38.873479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.431 [2024-07-24 15:44:38.873493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.049 ms 00:19:17.431 [2024-07-24 15:44:38.873506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.431 [2024-07-24 15:44:38.874825] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:17.431 [2024-07-24 15:44:38.891554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.431 [2024-07-24 15:44:38.891627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:17.431 [2024-07-24 15:44:38.891650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.729 ms 00:19:17.431 [2024-07-24 15:44:38.891664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.431 [2024-07-24 15:44:38.891803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.431 [2024-07-24 15:44:38.891827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:17.431 [2024-07-24 15:44:38.891847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:17.431 [2024-07-24 15:44:38.891860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.431 [2024-07-24 15:44:38.896526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:17.431 [2024-07-24 15:44:38.896573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:17.431 [2024-07-24 15:44:38.896591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.598 ms 00:19:17.431 [2024-07-24 15:44:38.896604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.431 [2024-07-24 15:44:38.896753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.431 [2024-07-24 15:44:38.896780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:17.431 [2024-07-24 15:44:38.896795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:19:17.431 [2024-07-24 15:44:38.896807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.431 [2024-07-24 15:44:38.896852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.431 [2024-07-24 15:44:38.896870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:17.431 [2024-07-24 15:44:38.896884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:17.431 [2024-07-24 15:44:38.896896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.431 [2024-07-24 15:44:38.896938] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:17.431 [2024-07-24 15:44:38.901273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.431 [2024-07-24 15:44:38.901328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:17.431 [2024-07-24 15:44:38.901355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.351 ms 00:19:17.431 [2024-07-24 15:44:38.901375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.431 [2024-07-24 15:44:38.901474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.431 [2024-07-24 15:44:38.901510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:17.431 [2024-07-24 15:44:38.901533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:17.431 [2024-07-24 15:44:38.901554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.431 [2024-07-24 15:44:38.901607] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:17.431 [2024-07-24 15:44:38.901646] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:17.431 [2024-07-24 15:44:38.901689] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:17.431 [2024-07-24 15:44:38.901710] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:17.431 [2024-07-24 15:44:38.901797] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:17.431 [2024-07-24 15:44:38.901814] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:17.431 [2024-07-24 15:44:38.901829] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:17.431 [2024-07-24 15:44:38.901845] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:17.431 [2024-07-24 15:44:38.901861] 
ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:17.431 [2024-07-24 15:44:38.901876] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:17.431 [2024-07-24 15:44:38.901900] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:17.431 [2024-07-24 15:44:38.901912] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:17.431 [2024-07-24 15:44:38.901924] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:17.431 [2024-07-24 15:44:38.901937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.432 [2024-07-24 15:44:38.901955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:17.432 [2024-07-24 15:44:38.901969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:19:17.432 [2024-07-24 15:44:38.901981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.432 [2024-07-24 15:44:38.902119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.432 [2024-07-24 15:44:38.902141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:17.432 [2024-07-24 15:44:38.902156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:17.432 [2024-07-24 15:44:38.902169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.432 [2024-07-24 15:44:38.902264] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:17.432 [2024-07-24 15:44:38.902283] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:17.432 [2024-07-24 15:44:38.902297] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.432 [2024-07-24 15:44:38.902316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.432 [2024-07-24 15:44:38.902329] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:17.432 [2024-07-24 15:44:38.902340] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:17.432 [2024-07-24 15:44:38.902353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:17.432 [2024-07-24 15:44:38.902366] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:17.432 [2024-07-24 15:44:38.902378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:17.432 [2024-07-24 15:44:38.902390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.432 [2024-07-24 15:44:38.902402] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:17.432 [2024-07-24 15:44:38.902413] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:17.432 [2024-07-24 15:44:38.902426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.432 [2024-07-24 15:44:38.902438] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:17.432 [2024-07-24 15:44:38.902449] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:17.432 [2024-07-24 15:44:38.902461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.432 [2024-07-24 15:44:38.902473] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:17.432 [2024-07-24 15:44:38.902485] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:17.432 [2024-07-24 15:44:38.902497] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.432 [2024-07-24 15:44:38.902522] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:17.432 [2024-07-24 15:44:38.902534] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:17.432 [2024-07-24 15:44:38.902547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:17.432 [2024-07-24 15:44:38.902560] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:17.432 [2024-07-24 15:44:38.902572] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:17.432 [2024-07-24 15:44:38.902584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:17.432 [2024-07-24 15:44:38.902595] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:17.432 [2024-07-24 15:44:38.902607] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:17.432 [2024-07-24 15:44:38.902619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:17.432 [2024-07-24 15:44:38.902630] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:17.432 [2024-07-24 15:44:38.902642] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:17.432 [2024-07-24 15:44:38.902654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:17.432 [2024-07-24 15:44:38.902665] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:17.432 [2024-07-24 15:44:38.902677] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:17.432 [2024-07-24 15:44:38.902689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:17.432 [2024-07-24 15:44:38.902700] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:17.432 [2024-07-24 15:44:38.902712] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:17.432 [2024-07-24 15:44:38.902724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.432 [2024-07-24 15:44:38.902735] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:17.432 [2024-07-24 15:44:38.902747] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:17.432 [2024-07-24 15:44:38.902759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.432 [2024-07-24 15:44:38.902770] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:17.432 [2024-07-24 15:44:38.902783] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:17.432 [2024-07-24 15:44:38.902795] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.432 [2024-07-24 15:44:38.902808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.432 [2024-07-24 15:44:38.902822] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:17.432 [2024-07-24 15:44:38.902834] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:17.432 [2024-07-24 15:44:38.902846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:17.432 [2024-07-24 15:44:38.902858] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:17.432 [2024-07-24 15:44:38.902869] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:17.432 [2024-07-24 15:44:38.902881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:17.432 
[2024-07-24 15:44:38.902895] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:17.432 [2024-07-24 15:44:38.902918] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.432 [2024-07-24 15:44:38.902945] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:17.432 [2024-07-24 15:44:38.902958] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:17.432 [2024-07-24 15:44:38.902972] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:17.432 [2024-07-24 15:44:38.902985] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:17.432 [2024-07-24 15:44:38.902997] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:17.432 [2024-07-24 15:44:38.903009] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:17.432 [2024-07-24 15:44:38.903022] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:17.432 [2024-07-24 15:44:38.903037] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:17.432 [2024-07-24 15:44:38.903056] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:17.432 [2024-07-24 15:44:38.903070] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:17.432 [2024-07-24 15:44:38.903082] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:17.432 [2024-07-24 15:44:38.903555] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:17.432 [2024-07-24 15:44:38.903622] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:17.432 [2024-07-24 15:44:38.903818] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:17.432 [2024-07-24 15:44:38.903996] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.432 [2024-07-24 15:44:38.904064] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:17.432 [2024-07-24 15:44:38.904217] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:17.432 [2024-07-24 15:44:38.904281] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:17.432 [2024-07-24 15:44:38.904481] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:17.432 [2024-07-24 15:44:38.904627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.432 [2024-07-24 15:44:38.904838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:17.432 [2024-07-24 15:44:38.904895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.407 ms 00:19:17.432 [2024-07-24 15:44:38.905055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.432 [2024-07-24 15:44:38.923714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.432 [2024-07-24 15:44:38.923986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:17.432 [2024-07-24 15:44:38.924144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.488 ms 00:19:17.432 [2024-07-24 15:44:38.924171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.432 [2024-07-24 15:44:38.924369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.432 [2024-07-24 15:44:38.924392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:17.432 [2024-07-24 15:44:38.924408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:17.432 [2024-07-24 15:44:38.924420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.432 [2024-07-24 15:44:38.978189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.432 [2024-07-24 15:44:38.978261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:17.432 [2024-07-24 15:44:38.978285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.731 ms 00:19:17.432 [2024-07-24 15:44:38.978299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.432 [2024-07-24 15:44:38.978442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.433 [2024-07-24 15:44:38.978463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:17.433 [2024-07-24 15:44:38.978478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:17.433 [2024-07-24 15:44:38.978491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.433 [2024-07-24 15:44:38.978859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.433 [2024-07-24 15:44:38.978880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:17.433 [2024-07-24 15:44:38.978894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:19:17.433 [2024-07-24 15:44:38.978907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.433 [2024-07-24 15:44:38.979082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.433 [2024-07-24 15:44:38.979128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:17.433 [2024-07-24 15:44:38.979143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:19:17.433 [2024-07-24 15:44:38.979155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.433 [2024-07-24 15:44:38.996826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.433 [2024-07-24 15:44:38.996897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:17.433 [2024-07-24 15:44:38.996930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.635 ms 00:19:17.433 [2024-07-24 15:44:38.996954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.433 [2024-07-24 15:44:39.013520] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:17.433 [2024-07-24 15:44:39.013571] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:17.433 [2024-07-24 15:44:39.013591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.433 [2024-07-24 15:44:39.013605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:17.433 [2024-07-24 15:44:39.013619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.389 ms 00:19:17.433 [2024-07-24 15:44:39.013632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.691 [2024-07-24 15:44:39.044305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.691 [2024-07-24 15:44:39.044383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:17.691 [2024-07-24 15:44:39.044405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.571 ms 00:19:17.691 [2024-07-24 15:44:39.044428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.691 [2024-07-24 15:44:39.061138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.691 [2024-07-24 15:44:39.061194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:17.691 [2024-07-24 15:44:39.061214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.570 ms 00:19:17.691 [2024-07-24 15:44:39.061228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.691 [2024-07-24 15:44:39.076838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.691 [2024-07-24 15:44:39.076900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:17.691 [2024-07-24 15:44:39.076919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.508 ms 00:19:17.691 [2024-07-24 15:44:39.076932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.691 [2024-07-24 15:44:39.077482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.691 [2024-07-24 15:44:39.077526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:17.691 [2024-07-24 15:44:39.077545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:19:17.691 [2024-07-24 15:44:39.077558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.691 [2024-07-24 15:44:39.155947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.691 [2024-07-24 15:44:39.156027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:17.691 [2024-07-24 15:44:39.156051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.347 ms 00:19:17.691 [2024-07-24 15:44:39.156065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.691 [2024-07-24 15:44:39.169427] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:17.691 [2024-07-24 15:44:39.183755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.691 [2024-07-24 15:44:39.183831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:17.691 [2024-07-24 
15:44:39.183855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.475 ms 00:19:17.691 [2024-07-24 15:44:39.183869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.691 [2024-07-24 15:44:39.184014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.691 [2024-07-24 15:44:39.184036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:17.691 [2024-07-24 15:44:39.184050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:17.691 [2024-07-24 15:44:39.184063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.691 [2024-07-24 15:44:39.184168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.692 [2024-07-24 15:44:39.184196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:17.692 [2024-07-24 15:44:39.184211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:17.692 [2024-07-24 15:44:39.184224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.692 [2024-07-24 15:44:39.186132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.692 [2024-07-24 15:44:39.186175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:17.692 [2024-07-24 15:44:39.186193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.874 ms 00:19:17.692 [2024-07-24 15:44:39.186205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.692 [2024-07-24 15:44:39.186249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.692 [2024-07-24 15:44:39.186265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:17.692 [2024-07-24 15:44:39.186279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:17.692 [2024-07-24 15:44:39.186305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.692 [2024-07-24 15:44:39.186361] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:17.692 [2024-07-24 15:44:39.186385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.692 [2024-07-24 15:44:39.186398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:17.692 [2024-07-24 15:44:39.186411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:17.692 [2024-07-24 15:44:39.186424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.692 [2024-07-24 15:44:39.217750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.692 [2024-07-24 15:44:39.217815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:17.692 [2024-07-24 15:44:39.217846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.288 ms 00:19:17.692 [2024-07-24 15:44:39.217860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.692 [2024-07-24 15:44:39.218013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.692 [2024-07-24 15:44:39.218035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:17.692 [2024-07-24 15:44:39.218051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:17.692 [2024-07-24 15:44:39.218064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.692 [2024-07-24 15:44:39.219108] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:17.692 [2024-07-24 15:44:39.223395] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 350.059 ms, result 0 00:19:17.692 [2024-07-24 15:44:39.224149] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:17.692 [2024-07-24 15:44:39.241156] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:28.831  Copying: 27/256 [MB] (27 MBps) Copying: 53/256 [MB] (25 MBps) Copying: 77/256 [MB] (24 MBps) Copying: 100/256 [MB] (22 MBps) Copying: 122/256 [MB] (22 MBps) Copying: 148/256 [MB] (25 MBps) Copying: 169/256 [MB] (21 MBps) Copying: 188/256 [MB] (18 MBps) Copying: 213/256 [MB] (24 MBps) Copying: 237/256 [MB] (24 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-07-24 15:44:50.183617] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:28.831 [2024-07-24 15:44:50.195845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.831 [2024-07-24 15:44:50.195896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:28.831 [2024-07-24 15:44:50.195919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:28.831 [2024-07-24 15:44:50.195942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.831 [2024-07-24 15:44:50.195977] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:28.831 [2024-07-24 15:44:50.199306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.831 [2024-07-24 15:44:50.199343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:28.831 [2024-07-24 15:44:50.199360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.305 ms 00:19:28.831 [2024-07-24 15:44:50.199372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.831 [2024-07-24 15:44:50.199679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.831 [2024-07-24 15:44:50.199703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:28.831 [2024-07-24 15:44:50.199719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:19:28.831 [2024-07-24 15:44:50.199731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.831 [2024-07-24 15:44:50.203545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.831 [2024-07-24 15:44:50.203585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:28.831 [2024-07-24 15:44:50.203601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.789 ms 00:19:28.831 [2024-07-24 15:44:50.203613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.831 [2024-07-24 15:44:50.211170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.831 [2024-07-24 15:44:50.211206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:28.831 [2024-07-24 15:44:50.211223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.512 ms 00:19:28.831 [2024-07-24 15:44:50.211235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.831 [2024-07-24 15:44:50.242127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:28.831 [2024-07-24 15:44:50.242175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:28.831 [2024-07-24 15:44:50.242195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.805 ms 00:19:28.831 [2024-07-24 15:44:50.242208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.831 [2024-07-24 15:44:50.260427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.831 [2024-07-24 15:44:50.260512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:28.831 [2024-07-24 15:44:50.260545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.126 ms 00:19:28.831 [2024-07-24 15:44:50.260559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.831 [2024-07-24 15:44:50.260776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.831 [2024-07-24 15:44:50.260801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:28.831 [2024-07-24 15:44:50.260816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:19:28.831 [2024-07-24 15:44:50.260829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.831 [2024-07-24 15:44:50.293367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.831 [2024-07-24 15:44:50.293426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:28.831 [2024-07-24 15:44:50.293463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.511 ms 00:19:28.831 [2024-07-24 15:44:50.293483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.831 [2024-07-24 15:44:50.325553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.831 [2024-07-24 15:44:50.325610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:28.831 [2024-07-24 15:44:50.325632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.955 ms 00:19:28.831 [2024-07-24 15:44:50.325646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.831 [2024-07-24 15:44:50.357218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.831 [2024-07-24 15:44:50.357271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:28.831 [2024-07-24 15:44:50.357293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.476 ms 00:19:28.831 [2024-07-24 15:44:50.357314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.832 [2024-07-24 15:44:50.389079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.832 [2024-07-24 15:44:50.389163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:28.832 [2024-07-24 15:44:50.389192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.627 ms 00:19:28.832 [2024-07-24 15:44:50.389205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.832 [2024-07-24 15:44:50.389310] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:28.832 [2024-07-24 15:44:50.389339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389384] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389731] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.389990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 
[2024-07-24 15:44:50.390052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:28.832 [2024-07-24 15:44:50.390391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:19:28.832 [2024-07-24 15:44:50.390403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:28.833 [2024-07-24 15:44:50.390725] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:28.833 [2024-07-24 15:44:50.390754] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 610a91da-4184-4e70-9b70-5012476db97f 
00:19:28.833 [2024-07-24 15:44:50.390767] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:28.833 [2024-07-24 15:44:50.390780] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:28.833 [2024-07-24 15:44:50.390798] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:28.833 [2024-07-24 15:44:50.390812] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:28.833 [2024-07-24 15:44:50.390823] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:28.833 [2024-07-24 15:44:50.390836] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:28.833 [2024-07-24 15:44:50.390847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:28.833 [2024-07-24 15:44:50.390858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:28.833 [2024-07-24 15:44:50.390874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:28.833 [2024-07-24 15:44:50.390889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.833 [2024-07-24 15:44:50.390909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:28.833 [2024-07-24 15:44:50.390937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.582 ms 00:19:28.833 [2024-07-24 15:44:50.390953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.833 [2024-07-24 15:44:50.408226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.833 [2024-07-24 15:44:50.408281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:28.833 [2024-07-24 15:44:50.408304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.233 ms 00:19:28.833 [2024-07-24 15:44:50.408319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.833 [2024-07-24 15:44:50.408661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.833 [2024-07-24 15:44:50.408693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:28.833 [2024-07-24 15:44:50.408710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:19:28.833 [2024-07-24 15:44:50.408722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.091 [2024-07-24 15:44:50.460736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.091 [2024-07-24 15:44:50.460810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:29.091 [2024-07-24 15:44:50.460839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.091 [2024-07-24 15:44:50.460853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.091 [2024-07-24 15:44:50.461019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.091 [2024-07-24 15:44:50.461040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:29.091 [2024-07-24 15:44:50.461054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.091 [2024-07-24 15:44:50.461066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.091 [2024-07-24 15:44:50.461161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.091 [2024-07-24 15:44:50.461183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:29.091 [2024-07-24 15:44:50.461197] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.091 [2024-07-24 15:44:50.461210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.091 [2024-07-24 15:44:50.461238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.091 [2024-07-24 15:44:50.461260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:29.091 [2024-07-24 15:44:50.461273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.091 [2024-07-24 15:44:50.461286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.091 [2024-07-24 15:44:50.565493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.091 [2024-07-24 15:44:50.565566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:29.091 [2024-07-24 15:44:50.565587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.091 [2024-07-24 15:44:50.565601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.091 [2024-07-24 15:44:50.608548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.091 [2024-07-24 15:44:50.608624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:29.091 [2024-07-24 15:44:50.608647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.091 [2024-07-24 15:44:50.608661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.091 [2024-07-24 15:44:50.608788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.091 [2024-07-24 15:44:50.608818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:29.091 [2024-07-24 15:44:50.608834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.091 [2024-07-24 15:44:50.608850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.091 [2024-07-24 15:44:50.608899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.091 [2024-07-24 15:44:50.608919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:29.091 [2024-07-24 15:44:50.608957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.091 [2024-07-24 15:44:50.608981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.091 [2024-07-24 15:44:50.609157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.091 [2024-07-24 15:44:50.609181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:29.091 [2024-07-24 15:44:50.609195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.091 [2024-07-24 15:44:50.609208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.091 [2024-07-24 15:44:50.609268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.091 [2024-07-24 15:44:50.609287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:29.091 [2024-07-24 15:44:50.609301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.091 [2024-07-24 15:44:50.609320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.091 [2024-07-24 15:44:50.609369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.091 [2024-07-24 15:44:50.609385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:19:29.091 [2024-07-24 15:44:50.609398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.091 [2024-07-24 15:44:50.609411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.091 [2024-07-24 15:44:50.609471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.091 [2024-07-24 15:44:50.609515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:29.091 [2024-07-24 15:44:50.609540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.091 [2024-07-24 15:44:50.609556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.091 [2024-07-24 15:44:50.609739] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 413.903 ms, result 0 00:19:30.489 00:19:30.489 00:19:30.489 15:44:51 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:30.489 15:44:51 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:30.746 15:44:52 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:31.004 [2024-07-24 15:44:52.414677] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:31.004 [2024-07-24 15:44:52.414821] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73412 ] 00:19:31.004 [2024-07-24 15:44:52.570018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.261 [2024-07-24 15:44:52.761339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.520 [2024-07-24 15:44:53.073705] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:31.520 [2024-07-24 15:44:53.073804] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:31.778 [2024-07-24 15:44:53.228928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.778 [2024-07-24 15:44:53.229023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:31.778 [2024-07-24 15:44:53.229062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:31.778 [2024-07-24 15:44:53.229116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.778 [2024-07-24 15:44:53.232538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.778 [2024-07-24 15:44:53.232590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.778 [2024-07-24 15:44:53.232624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.370 ms 00:19:31.778 [2024-07-24 15:44:53.232654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.778 [2024-07-24 15:44:53.232886] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:31.778 [2024-07-24 15:44:53.233956] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:31.778 [2024-07-24 15:44:53.234006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.778 [2024-07-24 
15:44:53.234038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.778 [2024-07-24 15:44:53.234061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.135 ms 00:19:31.778 [2024-07-24 15:44:53.234079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.778 [2024-07-24 15:44:53.235559] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:31.778 [2024-07-24 15:44:53.251907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.778 [2024-07-24 15:44:53.251957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:31.778 [2024-07-24 15:44:53.251995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.350 ms 00:19:31.778 [2024-07-24 15:44:53.252017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.778 [2024-07-24 15:44:53.252222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.778 [2024-07-24 15:44:53.252255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:31.778 [2024-07-24 15:44:53.252283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:31.778 [2024-07-24 15:44:53.252303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.778 [2024-07-24 15:44:53.256753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.778 [2024-07-24 15:44:53.256806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.778 [2024-07-24 15:44:53.256833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.358 ms 00:19:31.778 [2024-07-24 15:44:53.256852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.778 [2024-07-24 15:44:53.257056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.778 [2024-07-24 15:44:53.257120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.778 [2024-07-24 15:44:53.257146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:19:31.778 [2024-07-24 15:44:53.257166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.778 [2024-07-24 15:44:53.257230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.778 [2024-07-24 15:44:53.257256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:31.778 [2024-07-24 15:44:53.257277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:31.778 [2024-07-24 15:44:53.257296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.778 [2024-07-24 15:44:53.257362] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:31.778 [2024-07-24 15:44:53.261687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.778 [2024-07-24 15:44:53.261732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.778 [2024-07-24 15:44:53.261758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.348 ms 00:19:31.778 [2024-07-24 15:44:53.261779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.778 [2024-07-24 15:44:53.261877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.778 [2024-07-24 15:44:53.261912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:31.778 [2024-07-24 
15:44:53.261934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:31.778 [2024-07-24 15:44:53.261953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.778 [2024-07-24 15:44:53.262004] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:31.778 [2024-07-24 15:44:53.262048] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:31.778 [2024-07-24 15:44:53.262150] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:31.778 [2024-07-24 15:44:53.262192] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:31.778 [2024-07-24 15:44:53.262311] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:31.778 [2024-07-24 15:44:53.262340] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:31.778 [2024-07-24 15:44:53.262365] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:31.778 [2024-07-24 15:44:53.262397] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:31.778 [2024-07-24 15:44:53.262422] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:31.778 [2024-07-24 15:44:53.262442] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:31.778 [2024-07-24 15:44:53.262461] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:31.778 [2024-07-24 15:44:53.262479] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:31.778 [2024-07-24 15:44:53.262497] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:31.778 [2024-07-24 15:44:53.262517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.778 [2024-07-24 15:44:53.262544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:31.778 [2024-07-24 15:44:53.262565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:19:31.778 [2024-07-24 15:44:53.262585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.778 [2024-07-24 15:44:53.262731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.778 [2024-07-24 15:44:53.262769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:31.778 [2024-07-24 15:44:53.262792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:31.778 [2024-07-24 15:44:53.262812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.778 [2024-07-24 15:44:53.262957] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:31.778 [2024-07-24 15:44:53.262989] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:31.778 [2024-07-24 15:44:53.263010] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.778 [2024-07-24 15:44:53.263037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.778 [2024-07-24 15:44:53.263057] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:31.778 [2024-07-24 15:44:53.263074] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.12 MiB 00:19:31.778 [2024-07-24 15:44:53.263112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:31.778 [2024-07-24 15:44:53.263133] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:31.778 [2024-07-24 15:44:53.263153] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:31.778 [2024-07-24 15:44:53.263172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.778 [2024-07-24 15:44:53.263189] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:31.778 [2024-07-24 15:44:53.263207] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:31.778 [2024-07-24 15:44:53.263226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.778 [2024-07-24 15:44:53.263242] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:31.778 [2024-07-24 15:44:53.263260] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:31.778 [2024-07-24 15:44:53.263279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.778 [2024-07-24 15:44:53.263298] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:31.778 [2024-07-24 15:44:53.263315] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:31.778 [2024-07-24 15:44:53.263332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.778 [2024-07-24 15:44:53.263370] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:31.778 [2024-07-24 15:44:53.263390] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:31.778 [2024-07-24 15:44:53.263409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:31.778 [2024-07-24 15:44:53.263428] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:31.778 [2024-07-24 15:44:53.263447] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:31.778 [2024-07-24 15:44:53.263464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:31.778 [2024-07-24 15:44:53.263483] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:31.778 [2024-07-24 15:44:53.263500] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:31.778 [2024-07-24 15:44:53.263518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:31.778 [2024-07-24 15:44:53.263536] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:31.778 [2024-07-24 15:44:53.263556] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:31.778 [2024-07-24 15:44:53.263575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:31.778 [2024-07-24 15:44:53.263594] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:31.778 [2024-07-24 15:44:53.263613] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:31.778 [2024-07-24 15:44:53.263633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:31.778 [2024-07-24 15:44:53.263651] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:31.778 [2024-07-24 15:44:53.263671] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:31.778 [2024-07-24 15:44:53.263689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.778 [2024-07-24 15:44:53.263706] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:31.778 [2024-07-24 15:44:53.263724] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:31.778 [2024-07-24 15:44:53.263741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.778 [2024-07-24 15:44:53.263758] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:31.778 [2024-07-24 15:44:53.263778] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:31.778 [2024-07-24 15:44:53.263798] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.778 [2024-07-24 15:44:53.263816] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.778 [2024-07-24 15:44:53.263836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:31.778 [2024-07-24 15:44:53.263855] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:31.778 [2024-07-24 15:44:53.263872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:31.778 [2024-07-24 15:44:53.263890] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:31.778 [2024-07-24 15:44:53.263908] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:31.778 [2024-07-24 15:44:53.263926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:31.778 [2024-07-24 15:44:53.263945] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:31.778 [2024-07-24 15:44:53.263977] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.778 [2024-07-24 15:44:53.263997] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:31.778 [2024-07-24 15:44:53.264018] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:31.778 [2024-07-24 15:44:53.264038] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:31.778 [2024-07-24 15:44:53.264056] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:31.778 [2024-07-24 15:44:53.264076] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:31.778 [2024-07-24 15:44:53.264112] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:31.778 [2024-07-24 15:44:53.264134] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:31.778 [2024-07-24 15:44:53.264155] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:31.778 [2024-07-24 15:44:53.264174] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:31.778 [2024-07-24 15:44:53.264193] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:31.778 [2024-07-24 15:44:53.264213] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:31.778 [2024-07-24 15:44:53.264236] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:31.778 [2024-07-24 15:44:53.264258] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:31.778 [2024-07-24 15:44:53.264277] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:31.778 [2024-07-24 15:44:53.264298] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.778 [2024-07-24 15:44:53.264318] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:31.778 [2024-07-24 15:44:53.264338] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:31.778 [2024-07-24 15:44:53.264359] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:31.778 [2024-07-24 15:44:53.264379] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:31.778 [2024-07-24 15:44:53.264400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.778 [2024-07-24 15:44:53.264431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:31.778 [2024-07-24 15:44:53.264451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.520 ms 00:19:31.778 [2024-07-24 15:44:53.264468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.778 [2024-07-24 15:44:53.283502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.779 [2024-07-24 15:44:53.283693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:31.779 [2024-07-24 15:44:53.283869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.937 ms 00:19:31.779 [2024-07-24 15:44:53.284027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.779 [2024-07-24 15:44:53.284429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.779 [2024-07-24 15:44:53.284576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:31.779 [2024-07-24 15:44:53.284731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:31.779 [2024-07-24 15:44:53.284881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.779 [2024-07-24 15:44:53.341417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.779 [2024-07-24 15:44:53.341674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:31.779 [2024-07-24 15:44:53.341812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.347 ms 00:19:31.779 [2024-07-24 15:44:53.341940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.779 [2024-07-24 15:44:53.342138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.779 [2024-07-24 15:44:53.342224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:19:31.779 [2024-07-24 15:44:53.342346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:31.779 [2024-07-24 15:44:53.342407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.779 [2024-07-24 15:44:53.342855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.779 [2024-07-24 15:44:53.343018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:31.779 [2024-07-24 15:44:53.343180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:19:31.779 [2024-07-24 15:44:53.343303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.779 [2024-07-24 15:44:53.343544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.779 [2024-07-24 15:44:53.343635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:31.779 [2024-07-24 15:44:53.343751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:19:31.779 [2024-07-24 15:44:53.343872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.779 [2024-07-24 15:44:53.361760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.779 [2024-07-24 15:44:53.361961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:31.779 [2024-07-24 15:44:53.362115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.803 ms 00:19:31.779 [2024-07-24 15:44:53.362271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.037 [2024-07-24 15:44:53.379204] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:32.037 [2024-07-24 15:44:53.379439] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:32.037 [2024-07-24 15:44:53.379582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.037 [2024-07-24 15:44:53.379630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:32.037 [2024-07-24 15:44:53.379740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.055 ms 00:19:32.037 [2024-07-24 15:44:53.379861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.037 [2024-07-24 15:44:53.410540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.037 [2024-07-24 15:44:53.410753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:32.037 [2024-07-24 15:44:53.410888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.513 ms 00:19:32.037 [2024-07-24 15:44:53.410937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.037 [2024-07-24 15:44:53.426942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.037 [2024-07-24 15:44:53.426998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:32.037 [2024-07-24 15:44:53.427020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.869 ms 00:19:32.037 [2024-07-24 15:44:53.427032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.037 [2024-07-24 15:44:53.443149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.037 [2024-07-24 15:44:53.443245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:32.037 [2024-07-24 15:44:53.443266] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.984 ms 00:19:32.037 [2024-07-24 15:44:53.443278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.037 [2024-07-24 15:44:53.443850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.037 [2024-07-24 15:44:53.443888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:32.037 [2024-07-24 15:44:53.443906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:19:32.037 [2024-07-24 15:44:53.443917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.037 [2024-07-24 15:44:53.522577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.037 [2024-07-24 15:44:53.522653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:32.037 [2024-07-24 15:44:53.522676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.623 ms 00:19:32.037 [2024-07-24 15:44:53.522688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.037 [2024-07-24 15:44:53.537658] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:32.037 [2024-07-24 15:44:53.554435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.037 [2024-07-24 15:44:53.554519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:32.037 [2024-07-24 15:44:53.554543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.575 ms 00:19:32.037 [2024-07-24 15:44:53.554558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.037 [2024-07-24 15:44:53.554714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.037 [2024-07-24 15:44:53.554739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:32.037 [2024-07-24 15:44:53.554756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:32.037 [2024-07-24 15:44:53.554770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.037 [2024-07-24 15:44:53.554853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.037 [2024-07-24 15:44:53.554887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:32.037 [2024-07-24 15:44:53.554904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:32.037 [2024-07-24 15:44:53.554934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.037 [2024-07-24 15:44:53.557269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.037 [2024-07-24 15:44:53.557324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:32.037 [2024-07-24 15:44:53.557344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.290 ms 00:19:32.037 [2024-07-24 15:44:53.557358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.037 [2024-07-24 15:44:53.557408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.037 [2024-07-24 15:44:53.557427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:32.037 [2024-07-24 15:44:53.557442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:32.037 [2024-07-24 15:44:53.557463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.037 [2024-07-24 15:44:53.557520] mngt/ftl_mngt_self_test.c: 
208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:32.037 [2024-07-24 15:44:53.557551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.037 [2024-07-24 15:44:53.557574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:32.037 [2024-07-24 15:44:53.557590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:32.037 [2024-07-24 15:44:53.557603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.037 [2024-07-24 15:44:53.596407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.037 [2024-07-24 15:44:53.596508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:32.037 [2024-07-24 15:44:53.596550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.758 ms 00:19:32.037 [2024-07-24 15:44:53.596567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.037 [2024-07-24 15:44:53.596790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.037 [2024-07-24 15:44:53.596817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:32.037 [2024-07-24 15:44:53.596834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:32.037 [2024-07-24 15:44:53.596855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.037 [2024-07-24 15:44:53.598235] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:32.038 [2024-07-24 15:44:53.603947] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 368.859 ms, result 0 00:19:32.038 [2024-07-24 15:44:53.604964] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:32.038 [2024-07-24 15:44:53.625774] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:32.295  Copying: 4096/4096 [kB] (average 27 MBps)[2024-07-24 15:44:53.780128] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:32.295 [2024-07-24 15:44:53.792606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.295 [2024-07-24 15:44:53.792660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:32.295 [2024-07-24 15:44:53.792682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:32.295 [2024-07-24 15:44:53.792704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.295 [2024-07-24 15:44:53.792740] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:32.295 [2024-07-24 15:44:53.796157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.295 [2024-07-24 15:44:53.796194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:32.295 [2024-07-24 15:44:53.796212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.394 ms 00:19:32.295 [2024-07-24 15:44:53.796223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.295 [2024-07-24 15:44:53.797519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.295 [2024-07-24 15:44:53.797569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:32.295 [2024-07-24 
15:44:53.797588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:19:32.295 [2024-07-24 15:44:53.797605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.295 [2024-07-24 15:44:53.801652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.295 [2024-07-24 15:44:53.801714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:32.295 [2024-07-24 15:44:53.801733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.013 ms 00:19:32.295 [2024-07-24 15:44:53.801746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.295 [2024-07-24 15:44:53.809475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.295 [2024-07-24 15:44:53.809517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:32.295 [2024-07-24 15:44:53.809534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.682 ms 00:19:32.295 [2024-07-24 15:44:53.809547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.295 [2024-07-24 15:44:53.840905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.295 [2024-07-24 15:44:53.840961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:32.295 [2024-07-24 15:44:53.840980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.263 ms 00:19:32.295 [2024-07-24 15:44:53.840992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.295 [2024-07-24 15:44:53.859663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.295 [2024-07-24 15:44:53.859734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:32.295 [2024-07-24 15:44:53.859764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.554 ms 00:19:32.295 [2024-07-24 15:44:53.859777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.295 [2024-07-24 15:44:53.860018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.295 [2024-07-24 15:44:53.860042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:32.295 [2024-07-24 15:44:53.860057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:19:32.295 [2024-07-24 15:44:53.860069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.295 [2024-07-24 15:44:53.891884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.295 [2024-07-24 15:44:53.891944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:32.295 [2024-07-24 15:44:53.891981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.762 ms 00:19:32.295 [2024-07-24 15:44:53.891994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.555 [2024-07-24 15:44:53.923853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.555 [2024-07-24 15:44:53.923921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:32.555 [2024-07-24 15:44:53.923941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.764 ms 00:19:32.555 [2024-07-24 15:44:53.923953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.555 [2024-07-24 15:44:53.955775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.555 [2024-07-24 15:44:53.955843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist superblock 00:19:32.555 [2024-07-24 15:44:53.955865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.719 ms 00:19:32.555 [2024-07-24 15:44:53.955877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.555 [2024-07-24 15:44:53.988167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.555 [2024-07-24 15:44:53.988231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:32.555 [2024-07-24 15:44:53.988255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.144 ms 00:19:32.555 [2024-07-24 15:44:53.988273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.555 [2024-07-24 15:44:53.988375] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:32.555 [2024-07-24 15:44:53.988405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 
[2024-07-24 15:44:53.988635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 
state: free 00:19:32.555 [2024-07-24 15:44:53.988964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.988993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 
0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:32.555 [2024-07-24 15:44:53.989343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:32.556 [2024-07-24 15:44:53.989721] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:32.556 [2024-07-24 15:44:53.989771] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 610a91da-4184-4e70-9b70-5012476db97f 00:19:32.556 [2024-07-24 15:44:53.989786] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:32.556 [2024-07-24 15:44:53.989797] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:32.556 [2024-07-24 15:44:53.989808] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:32.556 [2024-07-24 15:44:53.989820] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:32.556 [2024-07-24 15:44:53.989831] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:32.556 [2024-07-24 15:44:53.989843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:32.556 [2024-07-24 15:44:53.989854] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:32.556 [2024-07-24 15:44:53.989865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:32.556 [2024-07-24 15:44:53.989875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:32.556 [2024-07-24 15:44:53.989887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.556 [2024-07-24 15:44:53.989911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:32.556 [2024-07-24 15:44:53.989929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.514 ms 00:19:32.556 [2024-07-24 15:44:53.989941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.556 [2024-07-24 15:44:54.007005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.556 [2024-07-24 15:44:54.007053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:32.556 [2024-07-24 15:44:54.007073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.031 ms 00:19:32.556 [2024-07-24 15:44:54.007106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.556 [2024-07-24 15:44:54.007430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.556 [2024-07-24 15:44:54.007457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:32.556 [2024-07-24 15:44:54.007481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:19:32.556 [2024-07-24 15:44:54.007500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.556 
[2024-07-24 15:44:54.057997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.556 [2024-07-24 15:44:54.058066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:32.556 [2024-07-24 15:44:54.058104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.556 [2024-07-24 15:44:54.058120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.556 [2024-07-24 15:44:54.058271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.556 [2024-07-24 15:44:54.058291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:32.556 [2024-07-24 15:44:54.058304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.556 [2024-07-24 15:44:54.058316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.556 [2024-07-24 15:44:54.058391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.556 [2024-07-24 15:44:54.058419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:32.556 [2024-07-24 15:44:54.058434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.556 [2024-07-24 15:44:54.058445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.556 [2024-07-24 15:44:54.058479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.556 [2024-07-24 15:44:54.058494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:32.556 [2024-07-24 15:44:54.058506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.556 [2024-07-24 15:44:54.058517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.814 [2024-07-24 15:44:54.160620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.814 [2024-07-24 15:44:54.160693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:32.814 [2024-07-24 15:44:54.160714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.814 [2024-07-24 15:44:54.160726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.814 [2024-07-24 15:44:54.202327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.814 [2024-07-24 15:44:54.202404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:32.814 [2024-07-24 15:44:54.202426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.814 [2024-07-24 15:44:54.202439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.814 [2024-07-24 15:44:54.202554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.814 [2024-07-24 15:44:54.202574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:32.814 [2024-07-24 15:44:54.202587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.814 [2024-07-24 15:44:54.202599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.814 [2024-07-24 15:44:54.202637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.814 [2024-07-24 15:44:54.202665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:32.814 [2024-07-24 15:44:54.202677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.814 [2024-07-24 15:44:54.202689] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.814 [2024-07-24 15:44:54.202820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.814 [2024-07-24 15:44:54.202842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:32.814 [2024-07-24 15:44:54.202855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.814 [2024-07-24 15:44:54.202866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.814 [2024-07-24 15:44:54.202930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.814 [2024-07-24 15:44:54.202965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:32.814 [2024-07-24 15:44:54.202999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.814 [2024-07-24 15:44:54.203018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.814 [2024-07-24 15:44:54.203080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.814 [2024-07-24 15:44:54.203131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:32.814 [2024-07-24 15:44:54.203154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.814 [2024-07-24 15:44:54.203166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.814 [2024-07-24 15:44:54.203226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.814 [2024-07-24 15:44:54.203244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:32.814 [2024-07-24 15:44:54.203264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.814 [2024-07-24 15:44:54.203280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.814 [2024-07-24 15:44:54.203473] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 410.897 ms, result 0 00:19:33.749 00:19:33.749 00:19:33.749 15:44:55 -- ftl/trim.sh@93 -- # svcpid=73448 00:19:33.749 15:44:55 -- ftl/trim.sh@94 -- # waitforlisten 73448 00:19:33.749 15:44:55 -- common/autotest_common.sh@819 -- # '[' -z 73448 ']' 00:19:33.749 15:44:55 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:33.749 15:44:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.749 15:44:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:33.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.749 15:44:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.749 15:44:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:33.749 15:44:55 -- common/autotest_common.sh@10 -- # set +x 00:19:34.007 [2024-07-24 15:44:55.451732] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:19:34.007 [2024-07-24 15:44:55.451896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73448 ] 00:19:34.265 [2024-07-24 15:44:55.618842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.265 [2024-07-24 15:44:55.807077] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:34.265 [2024-07-24 15:44:55.807348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.643 15:44:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:35.643 15:44:57 -- common/autotest_common.sh@852 -- # return 0 00:19:35.643 15:44:57 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:35.900 [2024-07-24 15:44:57.368485] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.900 [2024-07-24 15:44:57.368570] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:36.159 [2024-07-24 15:44:57.517352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.159 [2024-07-24 15:44:57.517427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:36.159 [2024-07-24 15:44:57.517453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:36.159 [2024-07-24 15:44:57.517467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.159 [2024-07-24 15:44:57.520980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.159 [2024-07-24 15:44:57.521056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:36.159 [2024-07-24 15:44:57.521115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.478 ms 00:19:36.159 [2024-07-24 15:44:57.521137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.159 [2024-07-24 15:44:57.521553] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:36.159 [2024-07-24 15:44:57.522619] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:36.159 [2024-07-24 15:44:57.522668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.159 [2024-07-24 15:44:57.522684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:36.159 [2024-07-24 15:44:57.522701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:19:36.159 [2024-07-24 15:44:57.522713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.159 [2024-07-24 15:44:57.524049] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:36.159 [2024-07-24 15:44:57.541041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.159 [2024-07-24 15:44:57.541148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:36.159 [2024-07-24 15:44:57.541172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.996 ms 00:19:36.159 [2024-07-24 15:44:57.541186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.159 [2024-07-24 15:44:57.541346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.159 [2024-07-24 15:44:57.541373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:19:36.159 [2024-07-24 15:44:57.541388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:36.159 [2024-07-24 15:44:57.541401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.159 [2024-07-24 15:44:57.545968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.159 [2024-07-24 15:44:57.546025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:36.159 [2024-07-24 15:44:57.546044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.497 ms 00:19:36.159 [2024-07-24 15:44:57.546062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.159 [2024-07-24 15:44:57.546215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.159 [2024-07-24 15:44:57.546241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:36.159 [2024-07-24 15:44:57.546256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:36.159 [2024-07-24 15:44:57.546270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.159 [2024-07-24 15:44:57.546308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.159 [2024-07-24 15:44:57.546332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:36.159 [2024-07-24 15:44:57.546345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:36.159 [2024-07-24 15:44:57.546358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.159 [2024-07-24 15:44:57.546399] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:36.159 [2024-07-24 15:44:57.551778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.159 [2024-07-24 15:44:57.551827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:36.159 [2024-07-24 15:44:57.551849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.389 ms 00:19:36.159 [2024-07-24 15:44:57.551861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.159 [2024-07-24 15:44:57.551974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.159 [2024-07-24 15:44:57.551995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:36.159 [2024-07-24 15:44:57.552011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:36.159 [2024-07-24 15:44:57.552023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.159 [2024-07-24 15:44:57.552058] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:36.159 [2024-07-24 15:44:57.552107] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:36.159 [2024-07-24 15:44:57.552157] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:36.159 [2024-07-24 15:44:57.552180] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:36.159 [2024-07-24 15:44:57.552284] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:36.159 [2024-07-24 15:44:57.552306] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:19:36.159 [2024-07-24 15:44:57.552324] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:36.159 [2024-07-24 15:44:57.552340] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:36.159 [2024-07-24 15:44:57.552358] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:36.159 [2024-07-24 15:44:57.552372] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:36.159 [2024-07-24 15:44:57.552386] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:36.159 [2024-07-24 15:44:57.552397] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:36.159 [2024-07-24 15:44:57.552413] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:36.159 [2024-07-24 15:44:57.552425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.159 [2024-07-24 15:44:57.552439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:36.159 [2024-07-24 15:44:57.552452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:19:36.159 [2024-07-24 15:44:57.552465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.159 [2024-07-24 15:44:57.552547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.160 [2024-07-24 15:44:57.552565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:36.160 [2024-07-24 15:44:57.552580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:36.160 [2024-07-24 15:44:57.552593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.160 [2024-07-24 15:44:57.552683] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:36.160 [2024-07-24 15:44:57.552702] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:36.160 [2024-07-24 15:44:57.552715] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:36.160 [2024-07-24 15:44:57.552729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.160 [2024-07-24 15:44:57.552742] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:36.160 [2024-07-24 15:44:57.552756] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:36.160 [2024-07-24 15:44:57.552768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:36.160 [2024-07-24 15:44:57.552784] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:36.160 [2024-07-24 15:44:57.552796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:36.160 [2024-07-24 15:44:57.552809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:36.160 [2024-07-24 15:44:57.552821] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:36.160 [2024-07-24 15:44:57.552833] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:36.160 [2024-07-24 15:44:57.552844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:36.160 [2024-07-24 15:44:57.552857] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:36.160 [2024-07-24 15:44:57.552869] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:36.160 [2024-07-24 15:44:57.552881] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.160 [2024-07-24 15:44:57.552892] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:36.160 [2024-07-24 15:44:57.552905] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:36.160 [2024-07-24 15:44:57.552916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.160 [2024-07-24 15:44:57.552929] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:36.160 [2024-07-24 15:44:57.552940] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:36.160 [2024-07-24 15:44:57.552953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:36.160 [2024-07-24 15:44:57.552964] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:36.160 [2024-07-24 15:44:57.552980] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:36.160 [2024-07-24 15:44:57.552991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:36.160 [2024-07-24 15:44:57.553004] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:36.160 [2024-07-24 15:44:57.553015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:36.160 [2024-07-24 15:44:57.553028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:36.160 [2024-07-24 15:44:57.553039] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:36.160 [2024-07-24 15:44:57.553052] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:36.160 [2024-07-24 15:44:57.553076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:36.160 [2024-07-24 15:44:57.553353] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:36.160 [2024-07-24 15:44:57.553443] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:36.160 [2024-07-24 15:44:57.553645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:36.160 [2024-07-24 15:44:57.553827] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:36.160 [2024-07-24 15:44:57.553999] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:36.160 [2024-07-24 15:44:57.554160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:36.160 [2024-07-24 15:44:57.554311] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:36.160 [2024-07-24 15:44:57.554472] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:36.160 [2024-07-24 15:44:57.554637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:36.160 [2024-07-24 15:44:57.554779] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:36.160 [2024-07-24 15:44:57.554933] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:36.160 [2024-07-24 15:44:57.554993] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:36.160 [2024-07-24 15:44:57.555158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.160 [2024-07-24 15:44:57.555187] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:36.160 [2024-07-24 15:44:57.555215] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:36.160 [2024-07-24 15:44:57.555235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:19:36.160 [2024-07-24 15:44:57.555250] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:36.160 [2024-07-24 15:44:57.555262] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:36.160 [2024-07-24 15:44:57.555275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:36.160 [2024-07-24 15:44:57.555289] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:36.160 [2024-07-24 15:44:57.555307] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:36.160 [2024-07-24 15:44:57.555321] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:36.160 [2024-07-24 15:44:57.555336] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:36.160 [2024-07-24 15:44:57.555349] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:36.160 [2024-07-24 15:44:57.555367] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:36.160 [2024-07-24 15:44:57.555381] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:36.160 [2024-07-24 15:44:57.555395] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:36.160 [2024-07-24 15:44:57.555408] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:36.160 [2024-07-24 15:44:57.555422] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:36.160 [2024-07-24 15:44:57.555434] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:36.160 [2024-07-24 15:44:57.555449] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:36.160 [2024-07-24 15:44:57.555461] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:36.160 [2024-07-24 15:44:57.555475] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:36.160 [2024-07-24 15:44:57.555489] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:36.160 [2024-07-24 15:44:57.555503] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:36.160 [2024-07-24 15:44:57.555517] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:36.160 [2024-07-24 15:44:57.555533] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:36.160 [2024-07-24 15:44:57.555546] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:36.160 [2024-07-24 15:44:57.555561] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:36.160 [2024-07-24 15:44:57.555573] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:36.160 [2024-07-24 15:44:57.555593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.160 [2024-07-24 15:44:57.555606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:36.160 [2024-07-24 15:44:57.555621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.950 ms 00:19:36.160 [2024-07-24 15:44:57.555633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.160 [2024-07-24 15:44:57.581339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.160 [2024-07-24 15:44:57.581420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:36.160 [2024-07-24 15:44:57.581456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.624 ms 00:19:36.160 [2024-07-24 15:44:57.581475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.160 [2024-07-24 15:44:57.581727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.160 [2024-07-24 15:44:57.581762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:36.160 [2024-07-24 15:44:57.581787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:36.160 [2024-07-24 15:44:57.581805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.160 [2024-07-24 15:44:57.638640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.160 [2024-07-24 15:44:57.638731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:36.160 [2024-07-24 15:44:57.638767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.782 ms 00:19:36.160 [2024-07-24 15:44:57.638786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.160 [2024-07-24 15:44:57.638969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.160 [2024-07-24 15:44:57.638999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:36.160 [2024-07-24 15:44:57.639022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:36.160 [2024-07-24 15:44:57.639044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.160 [2024-07-24 15:44:57.639495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.160 [2024-07-24 15:44:57.639546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:36.160 [2024-07-24 15:44:57.639579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:19:36.160 [2024-07-24 15:44:57.639597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.160 [2024-07-24 15:44:57.639810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.160 [2024-07-24 15:44:57.639845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:36.161 [2024-07-24 15:44:57.639871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:19:36.161 [2024-07-24 15:44:57.639900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:36.161 [2024-07-24 15:44:57.665625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.161 [2024-07-24 15:44:57.665703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:36.161 [2024-07-24 15:44:57.665739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.668 ms 00:19:36.161 [2024-07-24 15:44:57.665759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.161 [2024-07-24 15:44:57.690046] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:36.161 [2024-07-24 15:44:57.690141] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:36.161 [2024-07-24 15:44:57.690176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.161 [2024-07-24 15:44:57.690199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:36.161 [2024-07-24 15:44:57.690228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.185 ms 00:19:36.161 [2024-07-24 15:44:57.690246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.161 [2024-07-24 15:44:57.730595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.161 [2024-07-24 15:44:57.730695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:36.161 [2024-07-24 15:44:57.730738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.176 ms 00:19:36.161 [2024-07-24 15:44:57.730759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.161 [2024-07-24 15:44:57.754514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.161 [2024-07-24 15:44:57.754602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:36.161 [2024-07-24 15:44:57.754639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.541 ms 00:19:36.161 [2024-07-24 15:44:57.754661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.419 [2024-07-24 15:44:57.778297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.419 [2024-07-24 15:44:57.778397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:36.419 [2024-07-24 15:44:57.778439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.454 ms 00:19:36.419 [2024-07-24 15:44:57.778458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.419 [2024-07-24 15:44:57.779253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.419 [2024-07-24 15:44:57.779309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:36.419 [2024-07-24 15:44:57.779338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:19:36.419 [2024-07-24 15:44:57.779357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.419 [2024-07-24 15:44:57.866798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.420 [2024-07-24 15:44:57.866875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:36.420 [2024-07-24 15:44:57.866901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.378 ms 00:19:36.420 [2024-07-24 15:44:57.866927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.420 [2024-07-24 
15:44:57.880047] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:36.420 [2024-07-24 15:44:57.894554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.420 [2024-07-24 15:44:57.894642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:36.420 [2024-07-24 15:44:57.894666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.469 ms 00:19:36.420 [2024-07-24 15:44:57.894681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.420 [2024-07-24 15:44:57.894822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.420 [2024-07-24 15:44:57.894850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:36.420 [2024-07-24 15:44:57.894866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:36.420 [2024-07-24 15:44:57.894881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.420 [2024-07-24 15:44:57.894958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.420 [2024-07-24 15:44:57.894980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:36.420 [2024-07-24 15:44:57.894994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:36.420 [2024-07-24 15:44:57.895008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.420 [2024-07-24 15:44:57.896972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.420 [2024-07-24 15:44:57.897020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:36.420 [2024-07-24 15:44:57.897038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.931 ms 00:19:36.420 [2024-07-24 15:44:57.897052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.420 [2024-07-24 15:44:57.897118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.420 [2024-07-24 15:44:57.897144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:36.420 [2024-07-24 15:44:57.897157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:36.420 [2024-07-24 15:44:57.897175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.420 [2024-07-24 15:44:57.897223] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:36.420 [2024-07-24 15:44:57.897245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.420 [2024-07-24 15:44:57.897258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:36.420 [2024-07-24 15:44:57.897273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:36.420 [2024-07-24 15:44:57.897284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.420 [2024-07-24 15:44:57.929603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.420 [2024-07-24 15:44:57.929677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:36.420 [2024-07-24 15:44:57.929702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.276 ms 00:19:36.420 [2024-07-24 15:44:57.929715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.420 [2024-07-24 15:44:57.929858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.420 [2024-07-24 15:44:57.929881] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:36.420 [2024-07-24 15:44:57.929897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:36.420 [2024-07-24 15:44:57.929910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.420 [2024-07-24 15:44:57.930958] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:36.420 [2024-07-24 15:44:57.935301] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 413.271 ms, result 0 00:19:36.420 [2024-07-24 15:44:57.936829] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:36.420 Some configs were skipped because the RPC state that can call them passed over. 00:19:36.420 15:44:57 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:36.678 [2024-07-24 15:44:58.231687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.678 [2024-07-24 15:44:58.231788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:19:36.678 [2024-07-24 15:44:58.231820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.566 ms 00:19:36.678 [2024-07-24 15:44:58.231837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.678 [2024-07-24 15:44:58.231899] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 33.800 ms, result 0 00:19:36.678 true 00:19:36.678 15:44:58 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:36.937 [2024-07-24 15:44:58.519987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.937 [2024-07-24 15:44:58.520132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:19:36.937 [2024-07-24 15:44:58.520164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.271 ms 00:19:36.937 [2024-07-24 15:44:58.520177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.937 [2024-07-24 15:44:58.520241] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 36.537 ms, result 0 00:19:36.937 true 00:19:37.196 15:44:58 -- ftl/trim.sh@102 -- # killprocess 73448 00:19:37.196 15:44:58 -- common/autotest_common.sh@926 -- # '[' -z 73448 ']' 00:19:37.196 15:44:58 -- common/autotest_common.sh@930 -- # kill -0 73448 00:19:37.196 15:44:58 -- common/autotest_common.sh@931 -- # uname 00:19:37.196 15:44:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:37.196 15:44:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73448 00:19:37.196 killing process with pid 73448 00:19:37.196 15:44:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:37.196 15:44:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:37.196 15:44:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73448' 00:19:37.196 15:44:58 -- common/autotest_common.sh@945 -- # kill 73448 00:19:37.196 15:44:58 -- common/autotest_common.sh@950 -- # wait 73448 00:19:38.132 [2024-07-24 15:44:59.539543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.132 [2024-07-24 15:44:59.539625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinit core IO channel 00:19:38.132 [2024-07-24 15:44:59.539649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:38.132 [2024-07-24 15:44:59.539663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.132 [2024-07-24 15:44:59.539696] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:38.132 [2024-07-24 15:44:59.543082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.132 [2024-07-24 15:44:59.543135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:38.132 [2024-07-24 15:44:59.543158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.357 ms 00:19:38.132 [2024-07-24 15:44:59.543171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.132 [2024-07-24 15:44:59.543517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.132 [2024-07-24 15:44:59.543557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:38.132 [2024-07-24 15:44:59.543577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:19:38.132 [2024-07-24 15:44:59.543590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.132 [2024-07-24 15:44:59.547735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.132 [2024-07-24 15:44:59.547781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:38.132 [2024-07-24 15:44:59.547802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.107 ms 00:19:38.132 [2024-07-24 15:44:59.547818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.132 [2024-07-24 15:44:59.555576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.132 [2024-07-24 15:44:59.555618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:38.132 [2024-07-24 15:44:59.555639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.701 ms 00:19:38.132 [2024-07-24 15:44:59.555651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.132 [2024-07-24 15:44:59.569076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.132 [2024-07-24 15:44:59.569156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:38.132 [2024-07-24 15:44:59.569199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.347 ms 00:19:38.132 [2024-07-24 15:44:59.569212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.132 [2024-07-24 15:44:59.578249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.132 [2024-07-24 15:44:59.578328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:38.132 [2024-07-24 15:44:59.578355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.934 ms 00:19:38.132 [2024-07-24 15:44:59.578368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.132 [2024-07-24 15:44:59.578549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.132 [2024-07-24 15:44:59.578570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:38.132 [2024-07-24 15:44:59.578587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:19:38.132 [2024-07-24 15:44:59.578599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:38.132 [2024-07-24 15:44:59.591652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.132 [2024-07-24 15:44:59.591699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:38.132 [2024-07-24 15:44:59.591721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.021 ms 00:19:38.132 [2024-07-24 15:44:59.591734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.132 [2024-07-24 15:44:59.604405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.132 [2024-07-24 15:44:59.604457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:38.133 [2024-07-24 15:44:59.604485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.594 ms 00:19:38.133 [2024-07-24 15:44:59.604498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.133 [2024-07-24 15:44:59.617588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.133 [2024-07-24 15:44:59.617653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:38.133 [2024-07-24 15:44:59.617677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.013 ms 00:19:38.133 [2024-07-24 15:44:59.617689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.133 [2024-07-24 15:44:59.630613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.133 [2024-07-24 15:44:59.630665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:38.133 [2024-07-24 15:44:59.630687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.813 ms 00:19:38.133 [2024-07-24 15:44:59.630699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.133 [2024-07-24 15:44:59.630790] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:38.133 [2024-07-24 15:44:59.630819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.630837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.630850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.630865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.630878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.630895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.630908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.630941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.630956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.630970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.630983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.630997] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 
15:44:59.631365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:38.133 [2024-07-24 15:44:59.631424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:19:38.134 [2024-07-24 15:44:59.631707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.631991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.632005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.632017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.632035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.632047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.632063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.632075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.632102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.632117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:38.134 [2024-07-24 15:44:59.632131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:38.135 [2024-07-24 15:44:59.632143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:38.135 [2024-07-24 15:44:59.632159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:38.135 [2024-07-24 15:44:59.632172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:38.135 [2024-07-24 15:44:59.632186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:38.135 [2024-07-24 15:44:59.632199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:38.135 [2024-07-24 15:44:59.632213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:38.135 [2024-07-24 15:44:59.632225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:38.135 [2024-07-24 15:44:59.632239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:38.135 [2024-07-24 15:44:59.632260] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:38.135 [2024-07-24 15:44:59.632292] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 610a91da-4184-4e70-9b70-5012476db97f 00:19:38.135 [2024-07-24 15:44:59.632308] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:38.135 [2024-07-24 15:44:59.632322] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:38.135 [2024-07-24 15:44:59.632334] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:38.135 [2024-07-24 15:44:59.632349] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:38.135 [2024-07-24 15:44:59.632360] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:38.135 [2024-07-24 15:44:59.632374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:38.135 [2024-07-24 15:44:59.632386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:38.135 [2024-07-24 15:44:59.632398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:38.135 [2024-07-24 15:44:59.632409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:38.135 [2024-07-24 15:44:59.632423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.135 [2024-07-24 15:44:59.632435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:38.135 [2024-07-24 15:44:59.632450] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.639 ms 00:19:38.135 [2024-07-24 15:44:59.632462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.135 [2024-07-24 15:44:59.649047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.135 [2024-07-24 15:44:59.649114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:38.135 [2024-07-24 15:44:59.649141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.549 ms 00:19:38.135 [2024-07-24 15:44:59.649154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.135 [2024-07-24 15:44:59.649440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.135 [2024-07-24 15:44:59.649473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:38.135 [2024-07-24 15:44:59.649492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:19:38.135 [2024-07-24 15:44:59.649504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.135 [2024-07-24 15:44:59.710096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.135 [2024-07-24 15:44:59.710171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.135 [2024-07-24 15:44:59.710196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.135 [2024-07-24 15:44:59.710209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.135 [2024-07-24 15:44:59.710355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.135 [2024-07-24 15:44:59.710375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:38.135 [2024-07-24 15:44:59.710390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.135 [2024-07-24 15:44:59.710402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.135 [2024-07-24 15:44:59.710480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.135 [2024-07-24 15:44:59.710500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:38.135 [2024-07-24 15:44:59.710518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.135 [2024-07-24 15:44:59.710530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.135 [2024-07-24 15:44:59.710561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.135 [2024-07-24 15:44:59.710575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:38.135 [2024-07-24 15:44:59.710589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.135 [2024-07-24 15:44:59.710600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.393 [2024-07-24 15:44:59.819752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.393 [2024-07-24 15:44:59.819826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:38.393 [2024-07-24 15:44:59.819852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.393 [2024-07-24 15:44:59.819865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.393 [2024-07-24 15:44:59.861299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.393 [2024-07-24 15:44:59.861384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:19:38.393 [2024-07-24 15:44:59.861409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.393 [2024-07-24 15:44:59.861423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.393 [2024-07-24 15:44:59.861549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.393 [2024-07-24 15:44:59.861569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.393 [2024-07-24 15:44:59.861588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.393 [2024-07-24 15:44:59.861600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.393 [2024-07-24 15:44:59.861643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.393 [2024-07-24 15:44:59.861658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.393 [2024-07-24 15:44:59.861672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.393 [2024-07-24 15:44:59.861684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.393 [2024-07-24 15:44:59.861819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.393 [2024-07-24 15:44:59.861843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.393 [2024-07-24 15:44:59.861859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.393 [2024-07-24 15:44:59.861870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.393 [2024-07-24 15:44:59.861934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.393 [2024-07-24 15:44:59.861954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:38.393 [2024-07-24 15:44:59.861969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.393 [2024-07-24 15:44:59.861981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.393 [2024-07-24 15:44:59.862033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.393 [2024-07-24 15:44:59.862052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.393 [2024-07-24 15:44:59.862069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.393 [2024-07-24 15:44:59.862081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.393 [2024-07-24 15:44:59.862185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.393 [2024-07-24 15:44:59.862203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.393 [2024-07-24 15:44:59.862219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.393 [2024-07-24 15:44:59.862230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.393 [2024-07-24 15:44:59.862403] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 322.833 ms, result 0 00:19:39.764 15:45:01 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:39.764 [2024-07-24 15:45:01.133506] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:19:39.764 [2024-07-24 15:45:01.133654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73515 ] 00:19:39.764 [2024-07-24 15:45:01.322449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.021 [2024-07-24 15:45:01.596653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.588 [2024-07-24 15:45:01.902708] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:40.588 [2024-07-24 15:45:01.902796] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:40.588 [2024-07-24 15:45:02.058008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.588 [2024-07-24 15:45:02.058077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:40.588 [2024-07-24 15:45:02.058118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:40.588 [2024-07-24 15:45:02.058137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.588 [2024-07-24 15:45:02.061441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.588 [2024-07-24 15:45:02.061496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:40.588 [2024-07-24 15:45:02.061521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.273 ms 00:19:40.588 [2024-07-24 15:45:02.061539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.588 [2024-07-24 15:45:02.061686] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:40.588 [2024-07-24 15:45:02.062648] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:40.588 [2024-07-24 15:45:02.062691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.588 [2024-07-24 15:45:02.062712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:40.588 [2024-07-24 15:45:02.062725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:19:40.588 [2024-07-24 15:45:02.062737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.588 [2024-07-24 15:45:02.063998] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:40.588 [2024-07-24 15:45:02.083161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.588 [2024-07-24 15:45:02.083228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:40.588 [2024-07-24 15:45:02.083259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.163 ms 00:19:40.588 [2024-07-24 15:45:02.083279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.588 [2024-07-24 15:45:02.083442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.588 [2024-07-24 15:45:02.083474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:40.588 [2024-07-24 15:45:02.083505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:40.588 [2024-07-24 15:45:02.083525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.588 [2024-07-24 15:45:02.088376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.588 [2024-07-24 
15:45:02.088433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.588 [2024-07-24 15:45:02.088453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.761 ms 00:19:40.588 [2024-07-24 15:45:02.088473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.588 [2024-07-24 15:45:02.088641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.588 [2024-07-24 15:45:02.088667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.588 [2024-07-24 15:45:02.088681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:40.588 [2024-07-24 15:45:02.088694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.588 [2024-07-24 15:45:02.088737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.588 [2024-07-24 15:45:02.088753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:40.588 [2024-07-24 15:45:02.088766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:40.588 [2024-07-24 15:45:02.088777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.588 [2024-07-24 15:45:02.088816] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:40.588 [2024-07-24 15:45:02.093338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.588 [2024-07-24 15:45:02.093389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.588 [2024-07-24 15:45:02.093408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.538 ms 00:19:40.588 [2024-07-24 15:45:02.093420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.588 [2024-07-24 15:45:02.093531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.588 [2024-07-24 15:45:02.093559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:40.588 [2024-07-24 15:45:02.093572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:40.588 [2024-07-24 15:45:02.093584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.588 [2024-07-24 15:45:02.093618] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:40.588 [2024-07-24 15:45:02.093647] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:40.588 [2024-07-24 15:45:02.093689] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:40.588 [2024-07-24 15:45:02.093709] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:40.588 [2024-07-24 15:45:02.093795] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:40.588 [2024-07-24 15:45:02.093811] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:40.588 [2024-07-24 15:45:02.093826] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:40.588 [2024-07-24 15:45:02.093841] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:40.588 [2024-07-24 15:45:02.093854] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:40.588 [2024-07-24 15:45:02.093866] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:40.588 [2024-07-24 15:45:02.093878] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:40.588 [2024-07-24 15:45:02.093888] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:40.589 [2024-07-24 15:45:02.093899] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:40.589 [2024-07-24 15:45:02.093921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.589 [2024-07-24 15:45:02.093936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:40.589 [2024-07-24 15:45:02.093949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:19:40.589 [2024-07-24 15:45:02.093960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.589 [2024-07-24 15:45:02.094040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.589 [2024-07-24 15:45:02.094056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:40.589 [2024-07-24 15:45:02.094067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:40.589 [2024-07-24 15:45:02.094079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.589 [2024-07-24 15:45:02.094189] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:40.589 [2024-07-24 15:45:02.094207] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:40.589 [2024-07-24 15:45:02.094219] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:40.589 [2024-07-24 15:45:02.094236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.589 [2024-07-24 15:45:02.094248] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:40.589 [2024-07-24 15:45:02.094259] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:40.589 [2024-07-24 15:45:02.094270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:40.589 [2024-07-24 15:45:02.094282] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:40.589 [2024-07-24 15:45:02.094293] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:40.589 [2024-07-24 15:45:02.094304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:40.589 [2024-07-24 15:45:02.094315] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:40.589 [2024-07-24 15:45:02.094325] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:40.589 [2024-07-24 15:45:02.094336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:40.589 [2024-07-24 15:45:02.094347] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:40.589 [2024-07-24 15:45:02.094358] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:40.589 [2024-07-24 15:45:02.094369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.589 [2024-07-24 15:45:02.094380] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:40.589 [2024-07-24 15:45:02.094390] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:40.589 [2024-07-24 15:45:02.094401] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:19:40.589 [2024-07-24 15:45:02.094424] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:40.589 [2024-07-24 15:45:02.094435] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:40.589 [2024-07-24 15:45:02.094446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:40.589 [2024-07-24 15:45:02.094457] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:40.589 [2024-07-24 15:45:02.094476] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:40.589 [2024-07-24 15:45:02.094495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:40.589 [2024-07-24 15:45:02.094514] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:40.589 [2024-07-24 15:45:02.094526] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:40.589 [2024-07-24 15:45:02.094536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:40.589 [2024-07-24 15:45:02.094546] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:40.589 [2024-07-24 15:45:02.094557] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:40.589 [2024-07-24 15:45:02.094568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:40.589 [2024-07-24 15:45:02.094578] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:40.589 [2024-07-24 15:45:02.094588] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:40.589 [2024-07-24 15:45:02.094599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:40.589 [2024-07-24 15:45:02.094610] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:40.589 [2024-07-24 15:45:02.094628] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:40.589 [2024-07-24 15:45:02.094649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:40.589 [2024-07-24 15:45:02.094671] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:40.589 [2024-07-24 15:45:02.094693] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:40.589 [2024-07-24 15:45:02.094714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:40.589 [2024-07-24 15:45:02.094734] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:40.589 [2024-07-24 15:45:02.094756] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:40.589 [2024-07-24 15:45:02.094776] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:40.589 [2024-07-24 15:45:02.094797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.589 [2024-07-24 15:45:02.094818] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:40.589 [2024-07-24 15:45:02.094837] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:40.589 [2024-07-24 15:45:02.094858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:40.589 [2024-07-24 15:45:02.094878] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:40.589 [2024-07-24 15:45:02.094898] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:40.589 [2024-07-24 15:45:02.094936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:40.589 [2024-07-24 15:45:02.094964] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:40.589 [2024-07-24 15:45:02.095003] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:40.589 [2024-07-24 15:45:02.095027] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:40.589 [2024-07-24 15:45:02.095051] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:40.589 [2024-07-24 15:45:02.095070] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:40.589 [2024-07-24 15:45:02.095537] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:40.589 [2024-07-24 15:45:02.095659] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:40.589 [2024-07-24 15:45:02.095999] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:40.589 [2024-07-24 15:45:02.096319] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:40.589 [2024-07-24 15:45:02.096546] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:40.589 [2024-07-24 15:45:02.096777] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:40.589 [2024-07-24 15:45:02.096947] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:40.589 [2024-07-24 15:45:02.097155] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:40.589 [2024-07-24 15:45:02.097353] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:40.589 [2024-07-24 15:45:02.097599] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:40.589 [2024-07-24 15:45:02.097622] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:40.589 [2024-07-24 15:45:02.097637] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:40.589 [2024-07-24 15:45:02.097650] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:40.589 [2024-07-24 15:45:02.097662] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:40.589 [2024-07-24 15:45:02.097674] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:40.589 [2024-07-24 15:45:02.097686] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:40.589 [2024-07-24 15:45:02.097700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.589 [2024-07-24 15:45:02.097722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:40.589 [2024-07-24 15:45:02.097735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.556 ms 00:19:40.589 [2024-07-24 15:45:02.097747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.589 [2024-07-24 15:45:02.117907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.589 [2024-07-24 15:45:02.117992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.589 [2024-07-24 15:45:02.118029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.066 ms 00:19:40.589 [2024-07-24 15:45:02.118049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.589 [2024-07-24 15:45:02.118325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.589 [2024-07-24 15:45:02.118352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:40.589 [2024-07-24 15:45:02.118366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:40.589 [2024-07-24 15:45:02.118378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.848 [2024-07-24 15:45:02.192433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.848 [2024-07-24 15:45:02.192537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:40.848 [2024-07-24 15:45:02.192570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.014 ms 00:19:40.848 [2024-07-24 15:45:02.192590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.848 [2024-07-24 15:45:02.192809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.848 [2024-07-24 15:45:02.192835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:40.848 [2024-07-24 15:45:02.192857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:40.848 [2024-07-24 15:45:02.192875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.848 [2024-07-24 15:45:02.193375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.848 [2024-07-24 15:45:02.193404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:40.848 [2024-07-24 15:45:02.193425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:19:40.848 [2024-07-24 15:45:02.193442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.848 [2024-07-24 15:45:02.193809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.848 [2024-07-24 15:45:02.193843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:40.848 [2024-07-24 15:45:02.193864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:19:40.848 [2024-07-24 15:45:02.193881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.848 [2024-07-24 15:45:02.218798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.848 [2024-07-24 15:45:02.218889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:40.848 [2024-07-24 15:45:02.218929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.837 ms 00:19:40.848 
[2024-07-24 15:45:02.218950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.848 [2024-07-24 15:45:02.242171] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:40.848 [2024-07-24 15:45:02.242371] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:40.848 [2024-07-24 15:45:02.242402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.848 [2024-07-24 15:45:02.242421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:40.848 [2024-07-24 15:45:02.242444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.067 ms 00:19:40.848 [2024-07-24 15:45:02.242463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.848 [2024-07-24 15:45:02.280540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.848 [2024-07-24 15:45:02.280632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:40.848 [2024-07-24 15:45:02.280656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.859 ms 00:19:40.848 [2024-07-24 15:45:02.280682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.848 [2024-07-24 15:45:02.297654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.848 [2024-07-24 15:45:02.297742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:40.848 [2024-07-24 15:45:02.297764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.786 ms 00:19:40.848 [2024-07-24 15:45:02.297777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.848 [2024-07-24 15:45:02.314049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.848 [2024-07-24 15:45:02.314128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:40.848 [2024-07-24 15:45:02.314150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.093 ms 00:19:40.848 [2024-07-24 15:45:02.314162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.848 [2024-07-24 15:45:02.314657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.848 [2024-07-24 15:45:02.314684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:40.848 [2024-07-24 15:45:02.314699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:19:40.848 [2024-07-24 15:45:02.314710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.848 [2024-07-24 15:45:02.392250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.848 [2024-07-24 15:45:02.392320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:40.848 [2024-07-24 15:45:02.392344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.504 ms 00:19:40.848 [2024-07-24 15:45:02.392356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.848 [2024-07-24 15:45:02.405244] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:40.848 [2024-07-24 15:45:02.419435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.848 [2024-07-24 15:45:02.419502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:40.848 [2024-07-24 15:45:02.419525] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.914 ms 00:19:40.848 [2024-07-24 15:45:02.419537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.848 [2024-07-24 15:45:02.419675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.848 [2024-07-24 15:45:02.419696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:40.848 [2024-07-24 15:45:02.419710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:40.848 [2024-07-24 15:45:02.419722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.848 [2024-07-24 15:45:02.419791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.848 [2024-07-24 15:45:02.419814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:40.848 [2024-07-24 15:45:02.419827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:40.848 [2024-07-24 15:45:02.419839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.848 [2024-07-24 15:45:02.421747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.848 [2024-07-24 15:45:02.421789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:40.848 [2024-07-24 15:45:02.421805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.876 ms 00:19:40.848 [2024-07-24 15:45:02.421817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.848 [2024-07-24 15:45:02.421857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.848 [2024-07-24 15:45:02.421872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:40.848 [2024-07-24 15:45:02.421885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:40.848 [2024-07-24 15:45:02.421901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.848 [2024-07-24 15:45:02.421942] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:40.848 [2024-07-24 15:45:02.421958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.848 [2024-07-24 15:45:02.421969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:40.848 [2024-07-24 15:45:02.421981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:40.848 [2024-07-24 15:45:02.421992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.106 [2024-07-24 15:45:02.453164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.106 [2024-07-24 15:45:02.453238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:41.106 [2024-07-24 15:45:02.453273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.137 ms 00:19:41.106 [2024-07-24 15:45:02.453287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.106 [2024-07-24 15:45:02.453476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.106 [2024-07-24 15:45:02.453497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:41.106 [2024-07-24 15:45:02.453512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:41.106 [2024-07-24 15:45:02.453523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.106 [2024-07-24 15:45:02.454610] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:41.106 [2024-07-24 15:45:02.459202] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.262 ms, result 0 00:19:41.106 [2024-07-24 15:45:02.459957] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:41.106 [2024-07-24 15:45:02.476802] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:51.597  Copying: 22/256 [MB] (22 MBps) Copying: 42/256 [MB] (20 MBps) Copying: 68/256 [MB] (26 MBps) Copying: 92/256 [MB] (24 MBps) Copying: 116/256 [MB] (24 MBps) Copying: 141/256 [MB] (24 MBps) Copying: 166/256 [MB] (25 MBps) Copying: 192/256 [MB] (25 MBps) Copying: 218/256 [MB] (26 MBps) Copying: 244/256 [MB] (26 MBps) Copying: 256/256 [MB] (average 24 MBps)[2024-07-24 15:45:13.181281] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:51.856 [2024-07-24 15:45:13.201464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.856 [2024-07-24 15:45:13.201539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:51.856 [2024-07-24 15:45:13.201577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:51.856 [2024-07-24 15:45:13.201606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.856 [2024-07-24 15:45:13.201650] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:51.856 [2024-07-24 15:45:13.205851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.856 [2024-07-24 15:45:13.205905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:51.856 [2024-07-24 15:45:13.205931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.170 ms 00:19:51.856 [2024-07-24 15:45:13.205945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.856 [2024-07-24 15:45:13.206442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.856 [2024-07-24 15:45:13.206497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:51.856 [2024-07-24 15:45:13.206520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:19:51.856 [2024-07-24 15:45:13.206534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.856 [2024-07-24 15:45:13.211318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.856 [2024-07-24 15:45:13.211374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:51.856 [2024-07-24 15:45:13.211395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.752 ms 00:19:51.856 [2024-07-24 15:45:13.211409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.856 [2024-07-24 15:45:13.220892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.856 [2024-07-24 15:45:13.220947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:51.856 [2024-07-24 15:45:13.220970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.395 ms 00:19:51.856 [2024-07-24 15:45:13.220985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.856 [2024-07-24 15:45:13.261888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:51.856 [2024-07-24 15:45:13.261953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:51.856 [2024-07-24 15:45:13.261978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.809 ms 00:19:51.856 [2024-07-24 15:45:13.261992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.856 [2024-07-24 15:45:13.286415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.856 [2024-07-24 15:45:13.286484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:51.856 [2024-07-24 15:45:13.286518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.292 ms 00:19:51.856 [2024-07-24 15:45:13.286534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.856 [2024-07-24 15:45:13.286808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.856 [2024-07-24 15:45:13.286842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:51.856 [2024-07-24 15:45:13.286860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:19:51.856 [2024-07-24 15:45:13.286875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.856 [2024-07-24 15:45:13.327060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.856 [2024-07-24 15:45:13.327333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:51.856 [2024-07-24 15:45:13.327592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.152 ms 00:19:51.856 [2024-07-24 15:45:13.327803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.856 [2024-07-24 15:45:13.366842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.856 [2024-07-24 15:45:13.367124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:51.856 [2024-07-24 15:45:13.367354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.687 ms 00:19:51.857 [2024-07-24 15:45:13.367561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.857 [2024-07-24 15:45:13.411731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.857 [2024-07-24 15:45:13.411986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:51.857 [2024-07-24 15:45:13.412146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.853 ms 00:19:51.857 [2024-07-24 15:45:13.412179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.116 [2024-07-24 15:45:13.454335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.116 [2024-07-24 15:45:13.454403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:52.116 [2024-07-24 15:45:13.454428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.905 ms 00:19:52.116 [2024-07-24 15:45:13.454443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.116 [2024-07-24 15:45:13.454536] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:52.116 [2024-07-24 15:45:13.454566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454599] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.454999] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 
[2024-07-24 15:45:13.455441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:52.116 [2024-07-24 15:45:13.455629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:19:52.117 [2024-07-24 15:45:13.455803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.455986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.456000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.456014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.456029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.456044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.456058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.456073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.456103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.456121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.456136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.456151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:52.117 [2024-07-24 15:45:13.456178] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:52.117 [2024-07-24 15:45:13.456228] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 610a91da-4184-4e70-9b70-5012476db97f 
00:19:52.117 [2024-07-24 15:45:13.456245] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:52.117 [2024-07-24 15:45:13.456259] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:52.117 [2024-07-24 15:45:13.456273] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:52.117 [2024-07-24 15:45:13.456287] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:52.117 [2024-07-24 15:45:13.456300] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:52.117 [2024-07-24 15:45:13.456314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:52.117 [2024-07-24 15:45:13.456328] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:52.117 [2024-07-24 15:45:13.456341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:52.117 [2024-07-24 15:45:13.456353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:52.117 [2024-07-24 15:45:13.456367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.117 [2024-07-24 15:45:13.456389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:52.117 [2024-07-24 15:45:13.456404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.834 ms 00:19:52.117 [2024-07-24 15:45:13.456418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.117 [2024-07-24 15:45:13.478148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.117 [2024-07-24 15:45:13.478384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:52.117 [2024-07-24 15:45:13.478536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.681 ms 00:19:52.117 [2024-07-24 15:45:13.478684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.117 [2024-07-24 15:45:13.479164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.117 [2024-07-24 15:45:13.479263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:52.117 [2024-07-24 15:45:13.479457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:19:52.117 [2024-07-24 15:45:13.479533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.117 [2024-07-24 15:45:13.544450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.117 [2024-07-24 15:45:13.544725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:52.117 [2024-07-24 15:45:13.544884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.117 [2024-07-24 15:45:13.545013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.117 [2024-07-24 15:45:13.545264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.117 [2024-07-24 15:45:13.545353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:52.117 [2024-07-24 15:45:13.545560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.117 [2024-07-24 15:45:13.545767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.117 [2024-07-24 15:45:13.546052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.117 [2024-07-24 15:45:13.546245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:52.117 [2024-07-24 15:45:13.546391] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.117 [2024-07-24 15:45:13.546419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.117 [2024-07-24 15:45:13.546460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.117 [2024-07-24 15:45:13.546489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:52.117 [2024-07-24 15:45:13.546504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.117 [2024-07-24 15:45:13.546517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.117 [2024-07-24 15:45:13.668367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.117 [2024-07-24 15:45:13.668446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:52.117 [2024-07-24 15:45:13.668471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.117 [2024-07-24 15:45:13.668486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.117 [2024-07-24 15:45:13.711632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.117 [2024-07-24 15:45:13.711704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:52.117 [2024-07-24 15:45:13.711742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.117 [2024-07-24 15:45:13.711761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.117 [2024-07-24 15:45:13.711886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.117 [2024-07-24 15:45:13.711911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:52.117 [2024-07-24 15:45:13.711931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.117 [2024-07-24 15:45:13.711944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.117 [2024-07-24 15:45:13.711997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.117 [2024-07-24 15:45:13.712012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:52.117 [2024-07-24 15:45:13.712035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.117 [2024-07-24 15:45:13.712046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.375 [2024-07-24 15:45:13.712197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.375 [2024-07-24 15:45:13.712219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:52.375 [2024-07-24 15:45:13.712232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.375 [2024-07-24 15:45:13.712243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.375 [2024-07-24 15:45:13.712300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.375 [2024-07-24 15:45:13.712318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:52.375 [2024-07-24 15:45:13.712331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.375 [2024-07-24 15:45:13.712349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.375 [2024-07-24 15:45:13.712397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.375 [2024-07-24 15:45:13.712412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:19:52.375 [2024-07-24 15:45:13.712424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.375 [2024-07-24 15:45:13.712435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.375 [2024-07-24 15:45:13.712490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.375 [2024-07-24 15:45:13.712506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:52.375 [2024-07-24 15:45:13.712523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.375 [2024-07-24 15:45:13.712538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.375 [2024-07-24 15:45:13.712710] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 511.297 ms, result 0 00:19:53.307 00:19:53.307 00:19:53.307 15:45:14 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:53.872 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:53.872 15:45:15 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:53.872 15:45:15 -- ftl/trim.sh@109 -- # fio_kill 00:19:53.872 15:45:15 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:53.872 15:45:15 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:53.872 15:45:15 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:53.872 15:45:15 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:53.872 Process with pid 73448 is not found 00:19:53.872 15:45:15 -- ftl/trim.sh@20 -- # killprocess 73448 00:19:53.872 15:45:15 -- common/autotest_common.sh@926 -- # '[' -z 73448 ']' 00:19:53.872 15:45:15 -- common/autotest_common.sh@930 -- # kill -0 73448 00:19:53.872 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (73448) - No such process 00:19:53.872 15:45:15 -- common/autotest_common.sh@953 -- # echo 'Process with pid 73448 is not found' 00:19:53.872 ************************************ 00:19:53.872 END TEST ftl_trim 00:19:53.872 ************************************ 00:19:53.872 00:19:53.872 real 1m12.055s 00:19:53.872 user 1m39.968s 00:19:53.872 sys 0m7.018s 00:19:53.872 15:45:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:53.872 15:45:15 -- common/autotest_common.sh@10 -- # set +x 00:19:54.130 15:45:15 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:19:54.130 15:45:15 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:54.130 15:45:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:54.130 15:45:15 -- common/autotest_common.sh@10 -- # set +x 00:19:54.130 ************************************ 00:19:54.130 START TEST ftl_restore 00:19:54.130 ************************************ 00:19:54.130 15:45:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:19:54.130 * Looking for test storage... 
00:19:54.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:54.130 15:45:15 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:54.130 15:45:15 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:54.130 15:45:15 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:54.130 15:45:15 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:54.130 15:45:15 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:54.130 15:45:15 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:54.130 15:45:15 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:54.130 15:45:15 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:54.130 15:45:15 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:54.130 15:45:15 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.130 15:45:15 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.131 15:45:15 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:54.131 15:45:15 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:54.131 15:45:15 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:54.131 15:45:15 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:54.131 15:45:15 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:54.131 15:45:15 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:54.131 15:45:15 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.131 15:45:15 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.131 15:45:15 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:54.131 15:45:15 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:54.131 15:45:15 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:54.131 15:45:15 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:54.131 15:45:15 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:54.131 15:45:15 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:54.131 15:45:15 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:54.131 15:45:15 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:54.131 15:45:15 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:54.131 15:45:15 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:54.131 15:45:15 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:54.131 15:45:15 -- ftl/restore.sh@13 -- # mktemp -d 00:19:54.131 15:45:15 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.YVSJHVlfUu 00:19:54.131 15:45:15 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:54.131 15:45:15 -- ftl/restore.sh@16 -- # case $opt in 00:19:54.131 15:45:15 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:19:54.131 15:45:15 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:54.131 15:45:15 -- ftl/restore.sh@23 -- # shift 2 00:19:54.131 15:45:15 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:19:54.131 15:45:15 -- ftl/restore.sh@25 -- # timeout=240 00:19:54.131 15:45:15 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 
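For readability, here is a minimal sketch of the option-handling idiom that the restore.sh xtrace above records (getopts :u:c:f, nv_cache, shift 2, device, timeout, the mktemp mount dir, and the cleanup trap). Only statements visible in the trace are reproduced; the restore_kill body and the -u/-f branches are assumptions, since this run never exercises them:

    #!/usr/bin/env bash
    # Sketch reconstructed from the xtrace above; restore_kill and the
    # -u/-f branches are assumptions, everything else mirrors the trace.
    restore_kill() { :; }                 # assumed: real script tears down fio/spdk_tgt

    mount_dir=$(mktemp -d)                # trace: mount_dir=/tmp/tmp.YVSJHVlfUu
    while getopts ':u:c:f' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;        # trace: nv_cache=0000:00:06.0
            *) ;;                         # -u/-f paths not exercised in this run
        esac
    done
    shift 2                               # trace: shift 2 (drops '-c <bdf>')
    device=$1                             # trace: device=0000:00:07.0
    timeout=240
    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT

Invoked as `restore.sh -c 0000:00:06.0 0000:00:07.0`, getopts consumes the `-c` pair, the hard-coded `shift 2` discards it, and the remaining positional argument becomes the base device, which matches the variable values printed in the trace.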
00:19:54.131 15:45:15 -- ftl/restore.sh@39 -- # svcpid=73722 00:19:54.131 15:45:15 -- ftl/restore.sh@41 -- # waitforlisten 73722 00:19:54.131 15:45:15 -- common/autotest_common.sh@819 -- # '[' -z 73722 ']' 00:19:54.131 15:45:15 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.131 15:45:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.131 15:45:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:54.131 15:45:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.131 15:45:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:54.131 15:45:15 -- common/autotest_common.sh@10 -- # set +x 00:19:54.131 [2024-07-24 15:45:15.726893] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:54.388 [2024-07-24 15:45:15.727550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73722 ] 00:19:54.388 [2024-07-24 15:45:15.887170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.646 [2024-07-24 15:45:16.071938] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:54.646 [2024-07-24 15:45:16.072409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.019 15:45:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:56.019 15:45:17 -- common/autotest_common.sh@852 -- # return 0 00:19:56.019 15:45:17 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:19:56.019 15:45:17 -- ftl/common.sh@54 -- # local name=nvme0 00:19:56.019 15:45:17 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:19:56.019 15:45:17 -- ftl/common.sh@56 -- # local size=103424 00:19:56.019 15:45:17 -- ftl/common.sh@59 -- # local base_bdev 00:19:56.019 15:45:17 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:19:56.277 15:45:17 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:56.277 15:45:17 -- ftl/common.sh@62 -- # local base_size 00:19:56.277 15:45:17 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:56.277 15:45:17 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:19:56.277 15:45:17 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:56.277 15:45:17 -- common/autotest_common.sh@1359 -- # local bs 00:19:56.277 15:45:17 -- common/autotest_common.sh@1360 -- # local nb 00:19:56.277 15:45:17 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:56.535 15:45:18 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:56.535 { 00:19:56.535 "name": "nvme0n1", 00:19:56.535 "aliases": [ 00:19:56.535 "1eab503a-5e79-447a-9a07-35e661bcadd6" 00:19:56.535 ], 00:19:56.535 "product_name": "NVMe disk", 00:19:56.535 "block_size": 4096, 00:19:56.535 "num_blocks": 1310720, 00:19:56.535 "uuid": "1eab503a-5e79-447a-9a07-35e661bcadd6", 00:19:56.535 "assigned_rate_limits": { 00:19:56.535 "rw_ios_per_sec": 0, 00:19:56.535 "rw_mbytes_per_sec": 0, 00:19:56.535 "r_mbytes_per_sec": 0, 00:19:56.535 "w_mbytes_per_sec": 0 00:19:56.535 }, 00:19:56.535 "claimed": true, 00:19:56.535 
"claim_type": "read_many_write_one", 00:19:56.535 "zoned": false, 00:19:56.535 "supported_io_types": { 00:19:56.535 "read": true, 00:19:56.535 "write": true, 00:19:56.535 "unmap": true, 00:19:56.535 "write_zeroes": true, 00:19:56.535 "flush": true, 00:19:56.535 "reset": true, 00:19:56.535 "compare": true, 00:19:56.535 "compare_and_write": false, 00:19:56.535 "abort": true, 00:19:56.535 "nvme_admin": true, 00:19:56.535 "nvme_io": true 00:19:56.535 }, 00:19:56.535 "driver_specific": { 00:19:56.535 "nvme": [ 00:19:56.535 { 00:19:56.535 "pci_address": "0000:00:07.0", 00:19:56.535 "trid": { 00:19:56.535 "trtype": "PCIe", 00:19:56.535 "traddr": "0000:00:07.0" 00:19:56.535 }, 00:19:56.535 "ctrlr_data": { 00:19:56.535 "cntlid": 0, 00:19:56.535 "vendor_id": "0x1b36", 00:19:56.535 "model_number": "QEMU NVMe Ctrl", 00:19:56.535 "serial_number": "12341", 00:19:56.535 "firmware_revision": "8.0.0", 00:19:56.535 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:56.535 "oacs": { 00:19:56.535 "security": 0, 00:19:56.535 "format": 1, 00:19:56.535 "firmware": 0, 00:19:56.535 "ns_manage": 1 00:19:56.535 }, 00:19:56.535 "multi_ctrlr": false, 00:19:56.535 "ana_reporting": false 00:19:56.535 }, 00:19:56.535 "vs": { 00:19:56.535 "nvme_version": "1.4" 00:19:56.535 }, 00:19:56.535 "ns_data": { 00:19:56.535 "id": 1, 00:19:56.535 "can_share": false 00:19:56.535 } 00:19:56.535 } 00:19:56.535 ], 00:19:56.535 "mp_policy": "active_passive" 00:19:56.535 } 00:19:56.535 } 00:19:56.535 ]' 00:19:56.535 15:45:18 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:56.793 15:45:18 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:56.793 15:45:18 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:56.793 15:45:18 -- common/autotest_common.sh@1363 -- # nb=1310720 00:19:56.793 15:45:18 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:19:56.793 15:45:18 -- common/autotest_common.sh@1367 -- # echo 5120 00:19:56.793 15:45:18 -- ftl/common.sh@63 -- # base_size=5120 00:19:56.793 15:45:18 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:56.793 15:45:18 -- ftl/common.sh@67 -- # clear_lvols 00:19:56.793 15:45:18 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:56.793 15:45:18 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:57.051 15:45:18 -- ftl/common.sh@28 -- # stores=999cf7d9-32e4-4432-b6dc-a2c36408958b 00:19:57.051 15:45:18 -- ftl/common.sh@29 -- # for lvs in $stores 00:19:57.051 15:45:18 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 999cf7d9-32e4-4432-b6dc-a2c36408958b 00:19:57.308 15:45:18 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:57.565 15:45:19 -- ftl/common.sh@68 -- # lvs=7f53684c-d94e-4261-b991-e90de349ea44 00:19:57.565 15:45:19 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7f53684c-d94e-4261-b991-e90de349ea44 00:19:57.823 15:45:19 -- ftl/restore.sh@43 -- # split_bdev=0c99a899-edbd-45b9-8979-29937f238da0 00:19:57.823 15:45:19 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:19:57.823 15:45:19 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 0c99a899-edbd-45b9-8979-29937f238da0 00:19:57.823 15:45:19 -- ftl/common.sh@35 -- # local name=nvc0 00:19:57.823 15:45:19 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:19:57.823 15:45:19 -- ftl/common.sh@37 -- # local base_bdev=0c99a899-edbd-45b9-8979-29937f238da0 00:19:57.823 15:45:19 -- 
ftl/common.sh@38 -- # local cache_size= 00:19:57.823 15:45:19 -- ftl/common.sh@41 -- # get_bdev_size 0c99a899-edbd-45b9-8979-29937f238da0 00:19:57.823 15:45:19 -- common/autotest_common.sh@1357 -- # local bdev_name=0c99a899-edbd-45b9-8979-29937f238da0 00:19:57.823 15:45:19 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:57.823 15:45:19 -- common/autotest_common.sh@1359 -- # local bs 00:19:57.823 15:45:19 -- common/autotest_common.sh@1360 -- # local nb 00:19:57.823 15:45:19 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c99a899-edbd-45b9-8979-29937f238da0 00:19:58.080 15:45:19 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:58.080 { 00:19:58.080 "name": "0c99a899-edbd-45b9-8979-29937f238da0", 00:19:58.080 "aliases": [ 00:19:58.080 "lvs/nvme0n1p0" 00:19:58.080 ], 00:19:58.080 "product_name": "Logical Volume", 00:19:58.080 "block_size": 4096, 00:19:58.080 "num_blocks": 26476544, 00:19:58.081 "uuid": "0c99a899-edbd-45b9-8979-29937f238da0", 00:19:58.081 "assigned_rate_limits": { 00:19:58.081 "rw_ios_per_sec": 0, 00:19:58.081 "rw_mbytes_per_sec": 0, 00:19:58.081 "r_mbytes_per_sec": 0, 00:19:58.081 "w_mbytes_per_sec": 0 00:19:58.081 }, 00:19:58.081 "claimed": false, 00:19:58.081 "zoned": false, 00:19:58.081 "supported_io_types": { 00:19:58.081 "read": true, 00:19:58.081 "write": true, 00:19:58.081 "unmap": true, 00:19:58.081 "write_zeroes": true, 00:19:58.081 "flush": false, 00:19:58.081 "reset": true, 00:19:58.081 "compare": false, 00:19:58.081 "compare_and_write": false, 00:19:58.081 "abort": false, 00:19:58.081 "nvme_admin": false, 00:19:58.081 "nvme_io": false 00:19:58.081 }, 00:19:58.081 "driver_specific": { 00:19:58.081 "lvol": { 00:19:58.081 "lvol_store_uuid": "7f53684c-d94e-4261-b991-e90de349ea44", 00:19:58.081 "base_bdev": "nvme0n1", 00:19:58.081 "thin_provision": true, 00:19:58.081 "snapshot": false, 00:19:58.081 "clone": false, 00:19:58.081 "esnap_clone": false 00:19:58.081 } 00:19:58.081 } 00:19:58.081 } 00:19:58.081 ]' 00:19:58.081 15:45:19 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:58.081 15:45:19 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:58.081 15:45:19 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:58.338 15:45:19 -- common/autotest_common.sh@1363 -- # nb=26476544 00:19:58.338 15:45:19 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:19:58.338 15:45:19 -- common/autotest_common.sh@1367 -- # echo 103424 00:19:58.338 15:45:19 -- ftl/common.sh@41 -- # local base_size=5171 00:19:58.338 15:45:19 -- ftl/common.sh@44 -- # local nvc_bdev 00:19:58.338 15:45:19 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:19:58.596 15:45:20 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:58.596 15:45:20 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:58.596 15:45:20 -- ftl/common.sh@48 -- # get_bdev_size 0c99a899-edbd-45b9-8979-29937f238da0 00:19:58.596 15:45:20 -- common/autotest_common.sh@1357 -- # local bdev_name=0c99a899-edbd-45b9-8979-29937f238da0 00:19:58.596 15:45:20 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:58.596 15:45:20 -- common/autotest_common.sh@1359 -- # local bs 00:19:58.596 15:45:20 -- common/autotest_common.sh@1360 -- # local nb 00:19:58.596 15:45:20 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c99a899-edbd-45b9-8979-29937f238da0 00:19:58.853 15:45:20 -- common/autotest_common.sh@1361 -- # 
bdev_info='[ 00:19:58.853 { 00:19:58.853 "name": "0c99a899-edbd-45b9-8979-29937f238da0", 00:19:58.853 "aliases": [ 00:19:58.853 "lvs/nvme0n1p0" 00:19:58.853 ], 00:19:58.853 "product_name": "Logical Volume", 00:19:58.853 "block_size": 4096, 00:19:58.853 "num_blocks": 26476544, 00:19:58.853 "uuid": "0c99a899-edbd-45b9-8979-29937f238da0", 00:19:58.853 "assigned_rate_limits": { 00:19:58.853 "rw_ios_per_sec": 0, 00:19:58.853 "rw_mbytes_per_sec": 0, 00:19:58.853 "r_mbytes_per_sec": 0, 00:19:58.853 "w_mbytes_per_sec": 0 00:19:58.853 }, 00:19:58.853 "claimed": false, 00:19:58.853 "zoned": false, 00:19:58.853 "supported_io_types": { 00:19:58.853 "read": true, 00:19:58.853 "write": true, 00:19:58.853 "unmap": true, 00:19:58.853 "write_zeroes": true, 00:19:58.853 "flush": false, 00:19:58.853 "reset": true, 00:19:58.853 "compare": false, 00:19:58.853 "compare_and_write": false, 00:19:58.853 "abort": false, 00:19:58.853 "nvme_admin": false, 00:19:58.853 "nvme_io": false 00:19:58.853 }, 00:19:58.853 "driver_specific": { 00:19:58.853 "lvol": { 00:19:58.853 "lvol_store_uuid": "7f53684c-d94e-4261-b991-e90de349ea44", 00:19:58.853 "base_bdev": "nvme0n1", 00:19:58.853 "thin_provision": true, 00:19:58.853 "snapshot": false, 00:19:58.853 "clone": false, 00:19:58.853 "esnap_clone": false 00:19:58.853 } 00:19:58.853 } 00:19:58.853 } 00:19:58.853 ]' 00:19:58.853 15:45:20 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:58.853 15:45:20 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:58.853 15:45:20 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:58.853 15:45:20 -- common/autotest_common.sh@1363 -- # nb=26476544 00:19:58.853 15:45:20 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:19:58.853 15:45:20 -- common/autotest_common.sh@1367 -- # echo 103424 00:19:58.853 15:45:20 -- ftl/common.sh@48 -- # cache_size=5171 00:19:58.853 15:45:20 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:59.111 15:45:20 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:59.111 15:45:20 -- ftl/restore.sh@48 -- # get_bdev_size 0c99a899-edbd-45b9-8979-29937f238da0 00:19:59.111 15:45:20 -- common/autotest_common.sh@1357 -- # local bdev_name=0c99a899-edbd-45b9-8979-29937f238da0 00:19:59.111 15:45:20 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:59.111 15:45:20 -- common/autotest_common.sh@1359 -- # local bs 00:19:59.111 15:45:20 -- common/autotest_common.sh@1360 -- # local nb 00:19:59.111 15:45:20 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c99a899-edbd-45b9-8979-29937f238da0 00:19:59.370 15:45:20 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:59.370 { 00:19:59.370 "name": "0c99a899-edbd-45b9-8979-29937f238da0", 00:19:59.370 "aliases": [ 00:19:59.370 "lvs/nvme0n1p0" 00:19:59.370 ], 00:19:59.370 "product_name": "Logical Volume", 00:19:59.370 "block_size": 4096, 00:19:59.370 "num_blocks": 26476544, 00:19:59.370 "uuid": "0c99a899-edbd-45b9-8979-29937f238da0", 00:19:59.370 "assigned_rate_limits": { 00:19:59.370 "rw_ios_per_sec": 0, 00:19:59.370 "rw_mbytes_per_sec": 0, 00:19:59.370 "r_mbytes_per_sec": 0, 00:19:59.370 "w_mbytes_per_sec": 0 00:19:59.370 }, 00:19:59.370 "claimed": false, 00:19:59.370 "zoned": false, 00:19:59.370 "supported_io_types": { 00:19:59.370 "read": true, 00:19:59.370 "write": true, 00:19:59.370 "unmap": true, 00:19:59.370 "write_zeroes": true, 00:19:59.370 "flush": false, 00:19:59.370 "reset": true, 00:19:59.370 "compare": false, 
00:19:59.370 "compare_and_write": false, 00:19:59.370 "abort": false, 00:19:59.370 "nvme_admin": false, 00:19:59.370 "nvme_io": false 00:19:59.370 }, 00:19:59.370 "driver_specific": { 00:19:59.370 "lvol": { 00:19:59.370 "lvol_store_uuid": "7f53684c-d94e-4261-b991-e90de349ea44", 00:19:59.370 "base_bdev": "nvme0n1", 00:19:59.370 "thin_provision": true, 00:19:59.370 "snapshot": false, 00:19:59.370 "clone": false, 00:19:59.370 "esnap_clone": false 00:19:59.370 } 00:19:59.370 } 00:19:59.370 } 00:19:59.370 ]' 00:19:59.370 15:45:20 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:59.370 15:45:20 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:59.370 15:45:20 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:59.628 15:45:20 -- common/autotest_common.sh@1363 -- # nb=26476544 00:19:59.628 15:45:20 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:19:59.628 15:45:20 -- common/autotest_common.sh@1367 -- # echo 103424 00:19:59.628 15:45:20 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:59.628 15:45:20 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0c99a899-edbd-45b9-8979-29937f238da0 --l2p_dram_limit 10' 00:19:59.628 15:45:20 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:59.628 15:45:20 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:19:59.628 15:45:20 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:59.628 15:45:20 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:59.628 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:59.628 15:45:20 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0c99a899-edbd-45b9-8979-29937f238da0 --l2p_dram_limit 10 -c nvc0n1p0 00:19:59.888 [2024-07-24 15:45:21.265754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.888 [2024-07-24 15:45:21.265827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:59.888 [2024-07-24 15:45:21.265857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:59.888 [2024-07-24 15:45:21.265879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.888 [2024-07-24 15:45:21.265970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.888 [2024-07-24 15:45:21.265989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:59.888 [2024-07-24 15:45:21.266005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:59.888 [2024-07-24 15:45:21.266018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.888 [2024-07-24 15:45:21.266051] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:59.888 [2024-07-24 15:45:21.267126] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:59.888 [2024-07-24 15:45:21.267183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.888 [2024-07-24 15:45:21.267201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:59.888 [2024-07-24 15:45:21.267217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:19:59.888 [2024-07-24 15:45:21.267229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.888 [2024-07-24 15:45:21.267388] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 
2296fe34-9ba1-4bd9-945f-27e23b547ff8 00:19:59.888 [2024-07-24 15:45:21.268504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.888 [2024-07-24 15:45:21.268545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:59.888 [2024-07-24 15:45:21.268562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:59.888 [2024-07-24 15:45:21.268576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.888 [2024-07-24 15:45:21.273566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.888 [2024-07-24 15:45:21.273637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:59.888 [2024-07-24 15:45:21.273656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.888 ms 00:19:59.888 [2024-07-24 15:45:21.273671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.888 [2024-07-24 15:45:21.273827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.888 [2024-07-24 15:45:21.273858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:59.888 [2024-07-24 15:45:21.273873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:19:59.888 [2024-07-24 15:45:21.273892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.888 [2024-07-24 15:45:21.273963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.888 [2024-07-24 15:45:21.273989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:59.888 [2024-07-24 15:45:21.274010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:59.888 [2024-07-24 15:45:21.274024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.888 [2024-07-24 15:45:21.274062] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:59.888 [2024-07-24 15:45:21.278743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.888 [2024-07-24 15:45:21.278786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:59.888 [2024-07-24 15:45:21.278808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.689 ms 00:19:59.888 [2024-07-24 15:45:21.278820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.888 [2024-07-24 15:45:21.278871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.888 [2024-07-24 15:45:21.278893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:59.888 [2024-07-24 15:45:21.278927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:59.888 [2024-07-24 15:45:21.278942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.888 [2024-07-24 15:45:21.279021] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:59.888 [2024-07-24 15:45:21.279188] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:59.888 [2024-07-24 15:45:21.279230] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:59.888 [2024-07-24 15:45:21.279257] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:59.888 [2024-07-24 15:45:21.279296] ftl_layout.c: 
676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:59.888 [2024-07-24 15:45:21.279318] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:59.888 [2024-07-24 15:45:21.279345] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:59.888 [2024-07-24 15:45:21.279367] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:59.888 [2024-07-24 15:45:21.279401] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:59.888 [2024-07-24 15:45:21.279423] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:59.888 [2024-07-24 15:45:21.279449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.888 [2024-07-24 15:45:21.279471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:59.888 [2024-07-24 15:45:21.279519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:19:59.888 [2024-07-24 15:45:21.279542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.888 [2024-07-24 15:45:21.279652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.888 [2024-07-24 15:45:21.279682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:59.888 [2024-07-24 15:45:21.279712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:59.888 [2024-07-24 15:45:21.279736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.888 [2024-07-24 15:45:21.279871] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:59.888 [2024-07-24 15:45:21.279919] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:59.888 [2024-07-24 15:45:21.279940] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:59.888 [2024-07-24 15:45:21.279953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.888 [2024-07-24 15:45:21.279967] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:59.888 [2024-07-24 15:45:21.279978] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:59.888 [2024-07-24 15:45:21.280125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:59.888 [2024-07-24 15:45:21.280142] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:59.889 [2024-07-24 15:45:21.280160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:59.889 [2024-07-24 15:45:21.280178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:59.889 [2024-07-24 15:45:21.280200] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:59.889 [2024-07-24 15:45:21.280212] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:59.889 [2024-07-24 15:45:21.280227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:59.889 [2024-07-24 15:45:21.280238] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:59.889 [2024-07-24 15:45:21.280250] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:59.889 [2024-07-24 15:45:21.280260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.889 [2024-07-24 15:45:21.280275] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:59.889 [2024-07-24 15:45:21.280286] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:59.889 [2024-07-24 15:45:21.280298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.889 [2024-07-24 15:45:21.280308] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:59.889 [2024-07-24 15:45:21.280321] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:59.889 [2024-07-24 15:45:21.280332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:59.889 [2024-07-24 15:45:21.280344] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:59.889 [2024-07-24 15:45:21.280355] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:59.889 [2024-07-24 15:45:21.280367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:59.889 [2024-07-24 15:45:21.280377] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:59.889 [2024-07-24 15:45:21.280390] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:59.889 [2024-07-24 15:45:21.280400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:59.889 [2024-07-24 15:45:21.280419] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:59.889 [2024-07-24 15:45:21.280434] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:59.889 [2024-07-24 15:45:21.280448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:59.889 [2024-07-24 15:45:21.280458] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:59.889 [2024-07-24 15:45:21.280473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:59.889 [2024-07-24 15:45:21.280483] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:59.889 [2024-07-24 15:45:21.280496] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:59.889 [2024-07-24 15:45:21.280506] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:59.889 [2024-07-24 15:45:21.280521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:59.889 [2024-07-24 15:45:21.280538] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:59.889 [2024-07-24 15:45:21.280565] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:59.889 [2024-07-24 15:45:21.280587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:59.889 [2024-07-24 15:45:21.280610] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:59.889 [2024-07-24 15:45:21.280635] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:59.889 [2024-07-24 15:45:21.280656] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:59.889 [2024-07-24 15:45:21.280668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.889 [2024-07-24 15:45:21.280682] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:59.889 [2024-07-24 15:45:21.280693] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:59.889 [2024-07-24 15:45:21.280706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:59.889 [2024-07-24 15:45:21.280717] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:59.889 [2024-07-24 15:45:21.280731] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:59.889 
[2024-07-24 15:45:21.280743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:59.889 [2024-07-24 15:45:21.280767] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:59.889 [2024-07-24 15:45:21.280793] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:59.889 [2024-07-24 15:45:21.280810] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:59.889 [2024-07-24 15:45:21.280822] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:59.889 [2024-07-24 15:45:21.280836] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:59.889 [2024-07-24 15:45:21.280855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:59.889 [2024-07-24 15:45:21.280877] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:59.889 [2024-07-24 15:45:21.280890] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:59.889 [2024-07-24 15:45:21.280903] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:59.889 [2024-07-24 15:45:21.280915] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:59.889 [2024-07-24 15:45:21.280928] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:59.889 [2024-07-24 15:45:21.280940] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:59.889 [2024-07-24 15:45:21.280953] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:59.889 [2024-07-24 15:45:21.280965] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:59.889 [2024-07-24 15:45:21.280982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:59.889 [2024-07-24 15:45:21.280994] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:59.889 [2024-07-24 15:45:21.281012] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:59.889 [2024-07-24 15:45:21.281035] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:59.889 [2024-07-24 15:45:21.281061] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:59.889 [2024-07-24 15:45:21.281078] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 
00:19:59.889 [2024-07-24 15:45:21.281109] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:59.889 [2024-07-24 15:45:21.281128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.889 [2024-07-24 15:45:21.281142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:59.889 [2024-07-24 15:45:21.281157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.323 ms 00:19:59.889 [2024-07-24 15:45:21.281182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.889 [2024-07-24 15:45:21.299970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.889 [2024-07-24 15:45:21.300037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:59.889 [2024-07-24 15:45:21.300059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.705 ms 00:19:59.889 [2024-07-24 15:45:21.300074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.889 [2024-07-24 15:45:21.300223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.889 [2024-07-24 15:45:21.300247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:59.889 [2024-07-24 15:45:21.300261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:59.889 [2024-07-24 15:45:21.300277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.889 [2024-07-24 15:45:21.340362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.889 [2024-07-24 15:45:21.340436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:59.889 [2024-07-24 15:45:21.340459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.001 ms 00:19:59.889 [2024-07-24 15:45:21.340474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.889 [2024-07-24 15:45:21.340539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.889 [2024-07-24 15:45:21.340557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:59.889 [2024-07-24 15:45:21.340571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:59.889 [2024-07-24 15:45:21.340585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.889 [2024-07-24 15:45:21.340982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.889 [2024-07-24 15:45:21.341009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:59.889 [2024-07-24 15:45:21.341023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:19:59.889 [2024-07-24 15:45:21.341038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.889 [2024-07-24 15:45:21.341200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.889 [2024-07-24 15:45:21.341226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:59.889 [2024-07-24 15:45:21.341241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:19:59.889 [2024-07-24 15:45:21.341255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.889 [2024-07-24 15:45:21.359797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.889 [2024-07-24 15:45:21.359867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize reloc 00:19:59.889 [2024-07-24 15:45:21.359890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.511 ms 00:19:59.889 [2024-07-24 15:45:21.359904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.889 [2024-07-24 15:45:21.373777] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:59.889 [2024-07-24 15:45:21.376670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.889 [2024-07-24 15:45:21.376715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:59.890 [2024-07-24 15:45:21.376739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.609 ms 00:19:59.890 [2024-07-24 15:45:21.376753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.890 [2024-07-24 15:45:21.473729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.890 [2024-07-24 15:45:21.474000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:59.890 [2024-07-24 15:45:21.474180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.909 ms 00:19:59.890 [2024-07-24 15:45:21.474210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.890 [2024-07-24 15:45:21.474286] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:19:59.890 [2024-07-24 15:45:21.474309] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:20:05.153 [2024-07-24 15:45:25.672165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.153 [2024-07-24 15:45:25.672262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:05.153 [2024-07-24 15:45:25.672290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4197.889 ms 00:20:05.153 [2024-07-24 15:45:25.672304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.153 [2024-07-24 15:45:25.672574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.153 [2024-07-24 15:45:25.672595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:05.153 [2024-07-24 15:45:25.672612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:20:05.153 [2024-07-24 15:45:25.672628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.153 [2024-07-24 15:45:25.712768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.153 [2024-07-24 15:45:25.712853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:05.153 [2024-07-24 15:45:25.712881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.020 ms 00:20:05.153 [2024-07-24 15:45:25.712895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.153 [2024-07-24 15:45:25.745500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.153 [2024-07-24 15:45:25.745600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:05.153 [2024-07-24 15:45:25.745632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.496 ms 00:20:05.153 [2024-07-24 15:45:25.745645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.153 [2024-07-24 15:45:25.746132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.153 [2024-07-24 
15:45:25.746156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:05.153 [2024-07-24 15:45:25.746173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:20:05.153 [2024-07-24 15:45:25.746186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.153 [2024-07-24 15:45:25.837241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.153 [2024-07-24 15:45:25.837324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:05.153 [2024-07-24 15:45:25.837351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.940 ms 00:20:05.153 [2024-07-24 15:45:25.837364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.153 [2024-07-24 15:45:25.871053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.153 [2024-07-24 15:45:25.871139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:05.153 [2024-07-24 15:45:25.871171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.591 ms 00:20:05.153 [2024-07-24 15:45:25.871185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.153 [2024-07-24 15:45:25.873901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.153 [2024-07-24 15:45:25.873973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:05.153 [2024-07-24 15:45:25.874010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.640 ms 00:20:05.153 [2024-07-24 15:45:25.874033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.153 [2024-07-24 15:45:25.909972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.153 [2024-07-24 15:45:25.910047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:05.153 [2024-07-24 15:45:25.910073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.777 ms 00:20:05.153 [2024-07-24 15:45:25.910105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.153 [2024-07-24 15:45:25.910201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.153 [2024-07-24 15:45:25.910223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:05.153 [2024-07-24 15:45:25.910240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:05.153 [2024-07-24 15:45:25.910252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.153 [2024-07-24 15:45:25.910390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.153 [2024-07-24 15:45:25.910420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:05.153 [2024-07-24 15:45:25.910437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:05.153 [2024-07-24 15:45:25.910450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.153 [2024-07-24 15:45:25.911702] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4645.447 ms, result 0 00:20:05.153 { 00:20:05.153 "name": "ftl0", 00:20:05.153 "uuid": "2296fe34-9ba1-4bd9-945f-27e23b547ff8" 00:20:05.153 } 00:20:05.153 15:45:25 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:05.153 15:45:25 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:05.153 15:45:26 -- 
ftl/restore.sh@63 -- # echo ']}' 00:20:05.153 15:45:26 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:05.153 [2024-07-24 15:45:26.415152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.153 [2024-07-24 15:45:26.415252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:05.153 [2024-07-24 15:45:26.415286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:05.153 [2024-07-24 15:45:26.415310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.153 [2024-07-24 15:45:26.415365] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:05.153 [2024-07-24 15:45:26.420117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.153 [2024-07-24 15:45:26.420167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:05.153 [2024-07-24 15:45:26.420199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.711 ms 00:20:05.153 [2024-07-24 15:45:26.420219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.153 [2024-07-24 15:45:26.420699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.153 [2024-07-24 15:45:26.420751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:05.153 [2024-07-24 15:45:26.420781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:20:05.153 [2024-07-24 15:45:26.420800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.153 [2024-07-24 15:45:26.425398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.153 [2024-07-24 15:45:26.425452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:05.153 [2024-07-24 15:45:26.425481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.540 ms 00:20:05.153 [2024-07-24 15:45:26.425500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.153 [2024-07-24 15:45:26.434991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.153 [2024-07-24 15:45:26.435041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:05.154 [2024-07-24 15:45:26.435075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.442 ms 00:20:05.154 [2024-07-24 15:45:26.435116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.154 [2024-07-24 15:45:26.474179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.154 [2024-07-24 15:45:26.474243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:05.154 [2024-07-24 15:45:26.474270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.910 ms 00:20:05.154 [2024-07-24 15:45:26.474283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.154 [2024-07-24 15:45:26.495707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.154 [2024-07-24 15:45:26.495784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:05.154 [2024-07-24 15:45:26.495811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.327 ms 00:20:05.154 [2024-07-24 15:45:26.495824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.154 [2024-07-24 15:45:26.496121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:20:05.154 [2024-07-24 15:45:26.496158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:05.154 [2024-07-24 15:45:26.496190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:20:05.154 [2024-07-24 15:45:26.496214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.154 [2024-07-24 15:45:26.532125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.154 [2024-07-24 15:45:26.532200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:05.154 [2024-07-24 15:45:26.532229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.848 ms 00:20:05.154 [2024-07-24 15:45:26.532243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.154 [2024-07-24 15:45:26.564878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.154 [2024-07-24 15:45:26.564972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:05.154 [2024-07-24 15:45:26.565000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.554 ms 00:20:05.154 [2024-07-24 15:45:26.565014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.154 [2024-07-24 15:45:26.597047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.154 [2024-07-24 15:45:26.597117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:05.154 [2024-07-24 15:45:26.597143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.889 ms 00:20:05.154 [2024-07-24 15:45:26.597156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.154 [2024-07-24 15:45:26.628665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.154 [2024-07-24 15:45:26.628733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:05.154 [2024-07-24 15:45:26.628759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.354 ms 00:20:05.154 [2024-07-24 15:45:26.628771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.154 [2024-07-24 15:45:26.628845] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:05.154 [2024-07-24 15:45:26.628871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.628894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.628907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.628921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.628933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.628947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.628959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.628974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.628986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 
state: free 00:20:05.154 [2024-07-24 15:45:26.628999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 
0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:05.154 [2024-07-24 15:45:26.629736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.629990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630029] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:05.155 [2024-07-24 15:45:26.630277] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:05.155 [2024-07-24 15:45:26.630291] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2296fe34-9ba1-4bd9-945f-27e23b547ff8 00:20:05.155 [2024-07-24 15:45:26.630305] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:05.155 [2024-07-24 15:45:26.630318] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:05.155 [2024-07-24 15:45:26.630329] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:05.155 [2024-07-24 15:45:26.630343] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:05.155 [2024-07-24 15:45:26.630354] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:05.155 [2024-07-24 15:45:26.630367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:05.155 [2024-07-24 15:45:26.630379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:05.155 [2024-07-24 15:45:26.630392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:05.155 [2024-07-24 
15:45:26.630402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:05.155 [2024-07-24 15:45:26.630418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.155 [2024-07-24 15:45:26.630430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:05.155 [2024-07-24 15:45:26.630445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.579 ms 00:20:05.155 [2024-07-24 15:45:26.630459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.155 [2024-07-24 15:45:26.648285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.155 [2024-07-24 15:45:26.648370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:05.155 [2024-07-24 15:45:26.648403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.738 ms 00:20:05.155 [2024-07-24 15:45:26.648420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.155 [2024-07-24 15:45:26.648767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.155 [2024-07-24 15:45:26.648794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:05.155 [2024-07-24 15:45:26.648821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:20:05.155 [2024-07-24 15:45:26.648839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.155 [2024-07-24 15:45:26.709660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.155 [2024-07-24 15:45:26.709729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:05.155 [2024-07-24 15:45:26.709754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.155 [2024-07-24 15:45:26.709767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.155 [2024-07-24 15:45:26.709862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.155 [2024-07-24 15:45:26.709878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:05.155 [2024-07-24 15:45:26.709896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.155 [2024-07-24 15:45:26.709907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.155 [2024-07-24 15:45:26.710038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.155 [2024-07-24 15:45:26.710059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:05.155 [2024-07-24 15:45:26.710074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.155 [2024-07-24 15:45:26.710110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.155 [2024-07-24 15:45:26.710146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.155 [2024-07-24 15:45:26.710160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:05.155 [2024-07-24 15:45:26.710175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.155 [2024-07-24 15:45:26.710189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.413 [2024-07-24 15:45:26.836739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.413 [2024-07-24 15:45:26.836805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:05.413 [2024-07-24 15:45:26.836831] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.413 [2024-07-24 15:45:26.836844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.413 [2024-07-24 15:45:26.879942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.414 [2024-07-24 15:45:26.880015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:05.414 [2024-07-24 15:45:26.880040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.414 [2024-07-24 15:45:26.880057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.414 [2024-07-24 15:45:26.880199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.414 [2024-07-24 15:45:26.880220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:05.414 [2024-07-24 15:45:26.880235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.414 [2024-07-24 15:45:26.880248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.414 [2024-07-24 15:45:26.880316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.414 [2024-07-24 15:45:26.880333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:05.414 [2024-07-24 15:45:26.880347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.414 [2024-07-24 15:45:26.880359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.414 [2024-07-24 15:45:26.880512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.414 [2024-07-24 15:45:26.880552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:05.414 [2024-07-24 15:45:26.880570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.414 [2024-07-24 15:45:26.880582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.414 [2024-07-24 15:45:26.880653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.414 [2024-07-24 15:45:26.880673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:05.414 [2024-07-24 15:45:26.880688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.414 [2024-07-24 15:45:26.880700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.414 [2024-07-24 15:45:26.880755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.414 [2024-07-24 15:45:26.880770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:05.414 [2024-07-24 15:45:26.880786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.414 [2024-07-24 15:45:26.880797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.414 [2024-07-24 15:45:26.880856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.414 [2024-07-24 15:45:26.880873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:05.414 [2024-07-24 15:45:26.880888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.414 [2024-07-24 15:45:26.880900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.414 [2024-07-24 15:45:26.881065] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 465.916 ms, result 0 00:20:05.414 true 00:20:05.414 15:45:26 -- 
ftl/restore.sh@66 -- # killprocess 73722 00:20:05.414 15:45:26 -- common/autotest_common.sh@926 -- # '[' -z 73722 ']' 00:20:05.414 15:45:26 -- common/autotest_common.sh@930 -- # kill -0 73722 00:20:05.414 15:45:26 -- common/autotest_common.sh@931 -- # uname 00:20:05.414 15:45:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:05.414 15:45:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73722 00:20:05.414 killing process with pid 73722 00:20:05.414 15:45:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:05.414 15:45:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:05.414 15:45:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73722' 00:20:05.414 15:45:26 -- common/autotest_common.sh@945 -- # kill 73722 00:20:05.414 15:45:26 -- common/autotest_common.sh@950 -- # wait 73722 00:20:10.682 15:45:31 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:15.988 262144+0 records in 00:20:15.988 262144+0 records out 00:20:15.988 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.22174 s, 206 MB/s 00:20:15.988 15:45:36 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:17.890 15:45:39 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:17.890 [2024-07-24 15:45:39.290633] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:17.890 [2024-07-24 15:45:39.290767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74011 ] 00:20:17.890 [2024-07-24 15:45:39.452370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.148 [2024-07-24 15:45:39.642794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.406 [2024-07-24 15:45:39.957762] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:18.406 [2024-07-24 15:45:39.957845] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:18.664 [2024-07-24 15:45:40.119418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.664 [2024-07-24 15:45:40.119487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:18.664 [2024-07-24 15:45:40.119508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:18.664 [2024-07-24 15:45:40.119520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.664 [2024-07-24 15:45:40.119600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.664 [2024-07-24 15:45:40.119621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:18.664 [2024-07-24 15:45:40.119634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:18.664 [2024-07-24 15:45:40.119645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.664 [2024-07-24 15:45:40.119676] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:18.664 [2024-07-24 15:45:40.120739] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
00:20:18.664 [2024-07-24 15:45:40.120784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.664 [2024-07-24 15:45:40.120799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:18.665 [2024-07-24 15:45:40.120812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:20:18.665 [2024-07-24 15:45:40.120823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.665 [2024-07-24 15:45:40.122009] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:18.665 [2024-07-24 15:45:40.138716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.665 [2024-07-24 15:45:40.138787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:18.665 [2024-07-24 15:45:40.138818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.706 ms 00:20:18.665 [2024-07-24 15:45:40.138830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.665 [2024-07-24 15:45:40.138934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.665 [2024-07-24 15:45:40.138955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:18.665 [2024-07-24 15:45:40.138968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:18.665 [2024-07-24 15:45:40.138979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.665 [2024-07-24 15:45:40.143874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.665 [2024-07-24 15:45:40.143936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:18.665 [2024-07-24 15:45:40.143955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.734 ms 00:20:18.665 [2024-07-24 15:45:40.143967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.665 [2024-07-24 15:45:40.144133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.665 [2024-07-24 15:45:40.144155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:18.665 [2024-07-24 15:45:40.144168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:20:18.665 [2024-07-24 15:45:40.144180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.665 [2024-07-24 15:45:40.144260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.665 [2024-07-24 15:45:40.144291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:18.665 [2024-07-24 15:45:40.144305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:18.665 [2024-07-24 15:45:40.144316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.665 [2024-07-24 15:45:40.144365] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:18.665 [2024-07-24 15:45:40.148963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.665 [2024-07-24 15:45:40.149006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:18.665 [2024-07-24 15:45:40.149035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.619 ms 00:20:18.665 [2024-07-24 15:45:40.149047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.665 [2024-07-24 15:45:40.149124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.665 
[2024-07-24 15:45:40.149144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:18.665 [2024-07-24 15:45:40.149157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:18.665 [2024-07-24 15:45:40.149168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.665 [2024-07-24 15:45:40.149253] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:18.665 [2024-07-24 15:45:40.149302] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:18.665 [2024-07-24 15:45:40.149345] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:18.665 [2024-07-24 15:45:40.149366] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:18.665 [2024-07-24 15:45:40.149465] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:18.665 [2024-07-24 15:45:40.149484] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:18.665 [2024-07-24 15:45:40.149499] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:18.665 [2024-07-24 15:45:40.149550] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:18.665 [2024-07-24 15:45:40.149566] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:18.665 [2024-07-24 15:45:40.149584] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:18.665 [2024-07-24 15:45:40.149595] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:18.665 [2024-07-24 15:45:40.149606] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:18.665 [2024-07-24 15:45:40.149624] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:18.665 [2024-07-24 15:45:40.149643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.665 [2024-07-24 15:45:40.149656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:18.665 [2024-07-24 15:45:40.149668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:20:18.665 [2024-07-24 15:45:40.149679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.665 [2024-07-24 15:45:40.149762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.665 [2024-07-24 15:45:40.149779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:18.665 [2024-07-24 15:45:40.149795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:18.665 [2024-07-24 15:45:40.149805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.665 [2024-07-24 15:45:40.149909] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:18.665 [2024-07-24 15:45:40.149930] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:18.665 [2024-07-24 15:45:40.149942] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:18.665 [2024-07-24 15:45:40.149954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.665 [2024-07-24 15:45:40.149965] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:18.665 [2024-07-24 15:45:40.149976] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:18.665 [2024-07-24 15:45:40.149986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:18.665 [2024-07-24 15:45:40.149998] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:18.665 [2024-07-24 15:45:40.150009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:18.665 [2024-07-24 15:45:40.150019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:18.665 [2024-07-24 15:45:40.150029] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:18.665 [2024-07-24 15:45:40.150039] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:18.665 [2024-07-24 15:45:40.150058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:18.665 [2024-07-24 15:45:40.150068] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:18.665 [2024-07-24 15:45:40.150079] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:18.665 [2024-07-24 15:45:40.150322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.665 [2024-07-24 15:45:40.150372] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:18.665 [2024-07-24 15:45:40.150477] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:18.665 [2024-07-24 15:45:40.150524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.665 [2024-07-24 15:45:40.150571] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:18.665 [2024-07-24 15:45:40.150615] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:18.665 [2024-07-24 15:45:40.150721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:18.665 [2024-07-24 15:45:40.150770] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:18.665 [2024-07-24 15:45:40.150808] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:18.665 [2024-07-24 15:45:40.150891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:18.665 [2024-07-24 15:45:40.150988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:18.665 [2024-07-24 15:45:40.151157] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:18.665 [2024-07-24 15:45:40.151281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:18.665 [2024-07-24 15:45:40.151335] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:18.665 [2024-07-24 15:45:40.151476] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:18.665 [2024-07-24 15:45:40.151545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:18.665 [2024-07-24 15:45:40.151656] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:18.665 [2024-07-24 15:45:40.151739] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:18.665 [2024-07-24 15:45:40.151841] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:18.665 [2024-07-24 15:45:40.151864] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:18.665 [2024-07-24 15:45:40.151876] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:18.665 [2024-07-24 
15:45:40.151886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:18.665 [2024-07-24 15:45:40.151897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:18.665 [2024-07-24 15:45:40.151908] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:18.665 [2024-07-24 15:45:40.151918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:18.665 [2024-07-24 15:45:40.151927] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:18.665 [2024-07-24 15:45:40.151939] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:18.665 [2024-07-24 15:45:40.151951] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:18.665 [2024-07-24 15:45:40.151972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.665 [2024-07-24 15:45:40.151989] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:18.665 [2024-07-24 15:45:40.152000] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:18.665 [2024-07-24 15:45:40.152011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:18.665 [2024-07-24 15:45:40.152021] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:18.665 [2024-07-24 15:45:40.152032] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:18.666 [2024-07-24 15:45:40.152045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:18.666 [2024-07-24 15:45:40.152061] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:18.666 [2024-07-24 15:45:40.152076] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:18.666 [2024-07-24 15:45:40.152103] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:18.666 [2024-07-24 15:45:40.152117] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:18.666 [2024-07-24 15:45:40.152129] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:18.666 [2024-07-24 15:45:40.152140] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:18.666 [2024-07-24 15:45:40.152151] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:18.666 [2024-07-24 15:45:40.152162] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:18.666 [2024-07-24 15:45:40.152173] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:18.666 [2024-07-24 15:45:40.152185] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:18.666 [2024-07-24 15:45:40.152196] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:18.666 [2024-07-24 15:45:40.152207] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:18.666 [2024-07-24 15:45:40.152218] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:18.666 [2024-07-24 15:45:40.152232] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:18.666 [2024-07-24 15:45:40.152253] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:18.666 [2024-07-24 15:45:40.152271] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:18.666 [2024-07-24 15:45:40.152285] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:18.666 [2024-07-24 15:45:40.152310] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:18.666 [2024-07-24 15:45:40.152328] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:18.666 [2024-07-24 15:45:40.152343] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:18.666 [2024-07-24 15:45:40.152354] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:18.666 [2024-07-24 15:45:40.152383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.666 [2024-07-24 15:45:40.152398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:18.666 [2024-07-24 15:45:40.152411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.519 ms 00:20:18.666 [2024-07-24 15:45:40.152422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.666 [2024-07-24 15:45:40.172475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.666 [2024-07-24 15:45:40.172542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:18.666 [2024-07-24 15:45:40.172564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.978 ms 00:20:18.666 [2024-07-24 15:45:40.172576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.666 [2024-07-24 15:45:40.172698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.666 [2024-07-24 15:45:40.172721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:18.666 [2024-07-24 15:45:40.172737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:18.666 [2024-07-24 15:45:40.172752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.666 [2024-07-24 15:45:40.225762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.666 [2024-07-24 15:45:40.225833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:18.666 [2024-07-24 15:45:40.225856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.916 ms 00:20:18.666 [2024-07-24 15:45:40.225874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.666 [2024-07-24 15:45:40.225949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:18.666 [2024-07-24 15:45:40.225966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:18.666 [2024-07-24 15:45:40.225979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:18.666 [2024-07-24 15:45:40.225990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.666 [2024-07-24 15:45:40.226430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.666 [2024-07-24 15:45:40.226464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:18.666 [2024-07-24 15:45:40.226479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:20:18.666 [2024-07-24 15:45:40.226491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.666 [2024-07-24 15:45:40.226650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.666 [2024-07-24 15:45:40.226677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:18.666 [2024-07-24 15:45:40.226691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:20:18.666 [2024-07-24 15:45:40.226701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.666 [2024-07-24 15:45:40.245701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.666 [2024-07-24 15:45:40.245764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:18.666 [2024-07-24 15:45:40.245786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.967 ms 00:20:18.666 [2024-07-24 15:45:40.245798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-07-24 15:45:40.269645] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:18.925 [2024-07-24 15:45:40.269763] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:18.925 [2024-07-24 15:45:40.269797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-07-24 15:45:40.269817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:18.925 [2024-07-24 15:45:40.269841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.816 ms 00:20:18.925 [2024-07-24 15:45:40.269859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-07-24 15:45:40.311011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-07-24 15:45:40.311169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:18.925 [2024-07-24 15:45:40.311201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.034 ms 00:20:18.925 [2024-07-24 15:45:40.311221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-07-24 15:45:40.335438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-07-24 15:45:40.335579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:18.925 [2024-07-24 15:45:40.335618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.091 ms 00:20:18.925 [2024-07-24 15:45:40.335641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-07-24 15:45:40.360003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-07-24 15:45:40.360143] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:18.925 [2024-07-24 15:45:40.360176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.234 ms 00:20:18.925 [2024-07-24 15:45:40.360194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-07-24 15:45:40.360971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-07-24 15:45:40.361031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:18.925 [2024-07-24 15:45:40.361054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:20:18.925 [2024-07-24 15:45:40.361072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-07-24 15:45:40.446723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-07-24 15:45:40.446807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:18.925 [2024-07-24 15:45:40.446828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.583 ms 00:20:18.925 [2024-07-24 15:45:40.446841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-07-24 15:45:40.460058] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:18.925 [2024-07-24 15:45:40.462983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-07-24 15:45:40.463028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:18.925 [2024-07-24 15:45:40.463048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.052 ms 00:20:18.925 [2024-07-24 15:45:40.463060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-07-24 15:45:40.463209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-07-24 15:45:40.463231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:18.925 [2024-07-24 15:45:40.463249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:18.925 [2024-07-24 15:45:40.463261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-07-24 15:45:40.463362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-07-24 15:45:40.463390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:18.925 [2024-07-24 15:45:40.463404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:18.925 [2024-07-24 15:45:40.463425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-07-24 15:45:40.465362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-07-24 15:45:40.465402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:18.925 [2024-07-24 15:45:40.465418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.905 ms 00:20:18.925 [2024-07-24 15:45:40.465436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-07-24 15:45:40.465476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-07-24 15:45:40.465492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:18.925 [2024-07-24 15:45:40.465504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:18.925 [2024-07-24 15:45:40.465515] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-07-24 15:45:40.465563] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:18.925 [2024-07-24 15:45:40.465589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-07-24 15:45:40.465601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:18.925 [2024-07-24 15:45:40.465613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:18.925 [2024-07-24 15:45:40.465624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-07-24 15:45:40.497437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-07-24 15:45:40.497528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:18.925 [2024-07-24 15:45:40.497550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.776 ms 00:20:18.925 [2024-07-24 15:45:40.497562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-07-24 15:45:40.497686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-07-24 15:45:40.497705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:18.925 [2024-07-24 15:45:40.497726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:18.925 [2024-07-24 15:45:40.497759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-07-24 15:45:40.499176] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 379.181 ms, result 0 00:20:54.506  Copying: 28/1024 [MB] (28 MBps) Copying: 58/1024 [MB] (29 MBps) Copying: 86/1024 [MB] (28 MBps) Copying: 116/1024 [MB] (29 MBps) Copying: 144/1024 [MB] (28 MBps) Copying: 171/1024 [MB] (26 MBps) Copying: 199/1024 [MB] (27 MBps) Copying: 227/1024 [MB] (28 MBps) Copying: 256/1024 [MB] (29 MBps) Copying: 286/1024 [MB] (29 MBps) Copying: 314/1024 [MB] (28 MBps) Copying: 341/1024 [MB] (27 MBps) Copying: 369/1024 [MB] (27 MBps) Copying: 397/1024 [MB] (28 MBps) Copying: 425/1024 [MB] (28 MBps) Copying: 454/1024 [MB] (28 MBps) Copying: 482/1024 [MB] (28 MBps) Copying: 510/1024 [MB] (28 MBps) Copying: 539/1024 [MB] (29 MBps) Copying: 569/1024 [MB] (30 MBps) Copying: 600/1024 [MB] (30 MBps) Copying: 630/1024 [MB] (30 MBps) Copying: 658/1024 [MB] (27 MBps) Copying: 687/1024 [MB] (29 MBps) Copying: 717/1024 [MB] (30 MBps) Copying: 747/1024 [MB] (29 MBps) Copying: 776/1024 [MB] (29 MBps) Copying: 806/1024 [MB] (29 MBps) Copying: 835/1024 [MB] (29 MBps) Copying: 865/1024 [MB] (30 MBps) Copying: 893/1024 [MB] (28 MBps) Copying: 925/1024 [MB] (31 MBps) Copying: 956/1024 [MB] (30 MBps) Copying: 987/1024 [MB] (31 MBps) Copying: 1015/1024 [MB] (28 MBps) Copying: 1024/1024 [MB] (average 29 MBps)[2024-07-24 15:46:15.803870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.506 [2024-07-24 15:46:15.803931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:54.506 [2024-07-24 15:46:15.803952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:54.506 [2024-07-24 15:46:15.803965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.506 [2024-07-24 15:46:15.803995] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:54.506 [2024-07-24 15:46:15.807445] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.506 [2024-07-24 15:46:15.807491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:54.506 [2024-07-24 15:46:15.807509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.427 ms 00:20:54.506 [2024-07-24 15:46:15.807522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.506 [2024-07-24 15:46:15.808694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.506 [2024-07-24 15:46:15.808741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:54.506 [2024-07-24 15:46:15.808759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.129 ms 00:20:54.506 [2024-07-24 15:46:15.808771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.506 [2024-07-24 15:46:15.823996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.506 [2024-07-24 15:46:15.824068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:54.506 [2024-07-24 15:46:15.824106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.196 ms 00:20:54.506 [2024-07-24 15:46:15.824121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.506 [2024-07-24 15:46:15.830981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.506 [2024-07-24 15:46:15.831047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:54.506 [2024-07-24 15:46:15.831065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.806 ms 00:20:54.506 [2024-07-24 15:46:15.831076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.506 [2024-07-24 15:46:15.864472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.506 [2024-07-24 15:46:15.864579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:54.506 [2024-07-24 15:46:15.864602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.240 ms 00:20:54.506 [2024-07-24 15:46:15.864613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.506 [2024-07-24 15:46:15.883548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.506 [2024-07-24 15:46:15.883634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:54.506 [2024-07-24 15:46:15.883656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.855 ms 00:20:54.506 [2024-07-24 15:46:15.883669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.506 [2024-07-24 15:46:15.883913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.506 [2024-07-24 15:46:15.883941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:54.506 [2024-07-24 15:46:15.883967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:20:54.506 [2024-07-24 15:46:15.883980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.506 [2024-07-24 15:46:15.916451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.506 [2024-07-24 15:46:15.916513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:54.506 [2024-07-24 15:46:15.916541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.446 ms 00:20:54.506 [2024-07-24 15:46:15.916555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:54.506 [2024-07-24 15:46:15.948545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.506 [2024-07-24 15:46:15.948615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:54.506 [2024-07-24 15:46:15.948637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.925 ms 00:20:54.506 [2024-07-24 15:46:15.948649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.506 [2024-07-24 15:46:15.981801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.506 [2024-07-24 15:46:15.981872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:54.506 [2024-07-24 15:46:15.981905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.084 ms 00:20:54.506 [2024-07-24 15:46:15.981920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.506 [2024-07-24 15:46:16.016197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.506 [2024-07-24 15:46:16.016266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:54.506 [2024-07-24 15:46:16.016287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.112 ms 00:20:54.506 [2024-07-24 15:46:16.016298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.506 [2024-07-24 15:46:16.016370] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:54.506 [2024-07-24 15:46:16.016407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:54.506 [2024-07-24 15:46:16.016432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:54.506 [2024-07-24 15:46:16.016450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:54.506 [2024-07-24 15:46:16.016462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:54.506 [2024-07-24 15:46:16.016474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:54.506 [2024-07-24 15:46:16.016485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:54.506 [2024-07-24 15:46:16.016497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:54.506 [2024-07-24 15:46:16.016510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:54.506 [2024-07-24 15:46:16.016522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:54.506 [2024-07-24 15:46:16.016534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016592] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 
15:46:16.016892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.016992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 
00:20:54.507 [2024-07-24 15:46:16.017247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 
wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:54.507 [2024-07-24 15:46:16.017627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:54.508 [2024-07-24 15:46:16.017638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:54.508 [2024-07-24 15:46:16.017650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:54.508 [2024-07-24 15:46:16.017662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:54.508 [2024-07-24 15:46:16.017683] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:54.508 [2024-07-24 15:46:16.017694] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2296fe34-9ba1-4bd9-945f-27e23b547ff8 00:20:54.508 [2024-07-24 15:46:16.017707] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:54.508 [2024-07-24 15:46:16.017733] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:54.508 [2024-07-24 15:46:16.017752] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:54.508 [2024-07-24 15:46:16.017771] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:54.508 [2024-07-24 15:46:16.017782] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:54.508 [2024-07-24 15:46:16.017793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:54.508 [2024-07-24 15:46:16.017803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:54.508 [2024-07-24 15:46:16.017813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:54.508 [2024-07-24 15:46:16.017823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:54.508 [2024-07-24 15:46:16.017835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.508 [2024-07-24 15:46:16.017846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:54.508 [2024-07-24 15:46:16.017858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.468 ms 00:20:54.508 [2024-07-24 15:46:16.017869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.508 [2024-07-24 15:46:16.034807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.508 [2024-07-24 15:46:16.034871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:54.508 [2024-07-24 
15:46:16.034904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.841 ms 00:20:54.508 [2024-07-24 15:46:16.034917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.508 [2024-07-24 15:46:16.035204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.508 [2024-07-24 15:46:16.035222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:54.508 [2024-07-24 15:46:16.035241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:20:54.508 [2024-07-24 15:46:16.035253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.508 [2024-07-24 15:46:16.081836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.508 [2024-07-24 15:46:16.081908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:54.508 [2024-07-24 15:46:16.081929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.508 [2024-07-24 15:46:16.081940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.508 [2024-07-24 15:46:16.082022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.508 [2024-07-24 15:46:16.082037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.508 [2024-07-24 15:46:16.082058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.508 [2024-07-24 15:46:16.082070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.508 [2024-07-24 15:46:16.082242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.508 [2024-07-24 15:46:16.082270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:54.508 [2024-07-24 15:46:16.082291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.508 [2024-07-24 15:46:16.082309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.508 [2024-07-24 15:46:16.082346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.508 [2024-07-24 15:46:16.082369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.508 [2024-07-24 15:46:16.082391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.508 [2024-07-24 15:46:16.082410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.766 [2024-07-24 15:46:16.184832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.766 [2024-07-24 15:46:16.184926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:54.766 [2024-07-24 15:46:16.184959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.766 [2024-07-24 15:46:16.184980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.766 [2024-07-24 15:46:16.225568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.766 [2024-07-24 15:46:16.225640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:54.766 [2024-07-24 15:46:16.225661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.766 [2024-07-24 15:46:16.225673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.766 [2024-07-24 15:46:16.225777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.766 [2024-07-24 15:46:16.225809] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:54.766 [2024-07-24 15:46:16.225822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.766 [2024-07-24 15:46:16.225833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.766 [2024-07-24 15:46:16.225890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.766 [2024-07-24 15:46:16.225906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:54.766 [2024-07-24 15:46:16.225918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.766 [2024-07-24 15:46:16.225929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.766 [2024-07-24 15:46:16.226050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.766 [2024-07-24 15:46:16.226075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:54.766 [2024-07-24 15:46:16.226136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.766 [2024-07-24 15:46:16.226162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.766 [2024-07-24 15:46:16.226257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.766 [2024-07-24 15:46:16.226282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:54.766 [2024-07-24 15:46:16.226304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.766 [2024-07-24 15:46:16.226322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.766 [2024-07-24 15:46:16.226387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.766 [2024-07-24 15:46:16.226414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:54.766 [2024-07-24 15:46:16.226446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.766 [2024-07-24 15:46:16.226466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.766 [2024-07-24 15:46:16.226544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.766 [2024-07-24 15:46:16.226570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:54.766 [2024-07-24 15:46:16.226583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.766 [2024-07-24 15:46:16.226595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.766 [2024-07-24 15:46:16.226758] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 422.847 ms, result 0 00:20:56.139 00:20:56.139 00:20:56.139 15:46:17 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:20:56.139 [2024-07-24 15:46:17.512730] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:56.139 [2024-07-24 15:46:17.512874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74397 ] 00:20:56.139 [2024-07-24 15:46:17.693667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.397 [2024-07-24 15:46:17.919536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.666 [2024-07-24 15:46:18.238911] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:56.666 [2024-07-24 15:46:18.238987] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:56.944 [2024-07-24 15:46:18.393034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.944 [2024-07-24 15:46:18.393112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:56.944 [2024-07-24 15:46:18.393135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:56.944 [2024-07-24 15:46:18.393148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.944 [2024-07-24 15:46:18.393221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.944 [2024-07-24 15:46:18.393240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:56.944 [2024-07-24 15:46:18.393253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:56.944 [2024-07-24 15:46:18.393264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.944 [2024-07-24 15:46:18.393296] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:56.944 [2024-07-24 15:46:18.394243] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:56.944 [2024-07-24 15:46:18.394289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.944 [2024-07-24 15:46:18.394305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:56.944 [2024-07-24 15:46:18.394317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:20:56.944 [2024-07-24 15:46:18.394328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.944 [2024-07-24 15:46:18.396389] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:56.944 [2024-07-24 15:46:18.418603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.944 [2024-07-24 15:46:18.418664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:56.944 [2024-07-24 15:46:18.418692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.219 ms 00:20:56.944 [2024-07-24 15:46:18.418705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.944 [2024-07-24 15:46:18.418797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.944 [2024-07-24 15:46:18.418819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:56.944 [2024-07-24 15:46:18.418832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:56.944 [2024-07-24 15:46:18.418844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.944 [2024-07-24 15:46:18.423589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.944 [2024-07-24 
15:46:18.423644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:56.944 [2024-07-24 15:46:18.423662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.617 ms 00:20:56.944 [2024-07-24 15:46:18.423673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.944 [2024-07-24 15:46:18.423802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.944 [2024-07-24 15:46:18.423823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:56.944 [2024-07-24 15:46:18.423836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:56.944 [2024-07-24 15:46:18.423848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.944 [2024-07-24 15:46:18.423918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.944 [2024-07-24 15:46:18.423942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:56.944 [2024-07-24 15:46:18.423955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:56.944 [2024-07-24 15:46:18.423966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.944 [2024-07-24 15:46:18.424006] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:56.944 [2024-07-24 15:46:18.428395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.944 [2024-07-24 15:46:18.428439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:56.944 [2024-07-24 15:46:18.428460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.402 ms 00:20:56.944 [2024-07-24 15:46:18.428478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.944 [2024-07-24 15:46:18.428529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.944 [2024-07-24 15:46:18.428545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:56.944 [2024-07-24 15:46:18.428557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:56.944 [2024-07-24 15:46:18.428568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.944 [2024-07-24 15:46:18.428623] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:56.944 [2024-07-24 15:46:18.428658] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:56.944 [2024-07-24 15:46:18.428701] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:56.944 [2024-07-24 15:46:18.428721] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:56.944 [2024-07-24 15:46:18.428812] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:56.944 [2024-07-24 15:46:18.428831] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:56.944 [2024-07-24 15:46:18.428845] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:56.944 [2024-07-24 15:46:18.428861] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:56.944 [2024-07-24 15:46:18.428875] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:56.944 [2024-07-24 15:46:18.428892] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:56.944 [2024-07-24 15:46:18.428903] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:56.944 [2024-07-24 15:46:18.428914] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:56.944 [2024-07-24 15:46:18.428925] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:56.944 [2024-07-24 15:46:18.428937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.944 [2024-07-24 15:46:18.428948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:56.944 [2024-07-24 15:46:18.428960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:20:56.944 [2024-07-24 15:46:18.428971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.944 [2024-07-24 15:46:18.429054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.944 [2024-07-24 15:46:18.429075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:56.944 [2024-07-24 15:46:18.429115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:56.944 [2024-07-24 15:46:18.429128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.944 [2024-07-24 15:46:18.429245] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:56.944 [2024-07-24 15:46:18.429263] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:56.944 [2024-07-24 15:46:18.429275] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:56.944 [2024-07-24 15:46:18.429286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:56.944 [2024-07-24 15:46:18.429298] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:56.944 [2024-07-24 15:46:18.429308] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:56.944 [2024-07-24 15:46:18.429319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:56.944 [2024-07-24 15:46:18.429329] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:56.944 [2024-07-24 15:46:18.429340] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:56.944 [2024-07-24 15:46:18.429350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:56.944 [2024-07-24 15:46:18.429361] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:56.944 [2024-07-24 15:46:18.429371] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:56.944 [2024-07-24 15:46:18.429382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:56.945 [2024-07-24 15:46:18.429392] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:56.945 [2024-07-24 15:46:18.429403] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:56.945 [2024-07-24 15:46:18.429413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:56.945 [2024-07-24 15:46:18.429423] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:56.945 [2024-07-24 15:46:18.429439] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:56.945 [2024-07-24 15:46:18.429456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:20:56.945 [2024-07-24 15:46:18.429474] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:56.945 [2024-07-24 15:46:18.429490] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:56.945 [2024-07-24 15:46:18.429517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:56.945 [2024-07-24 15:46:18.429529] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:56.945 [2024-07-24 15:46:18.429539] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:56.945 [2024-07-24 15:46:18.429549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:56.945 [2024-07-24 15:46:18.429560] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:56.945 [2024-07-24 15:46:18.429570] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:56.945 [2024-07-24 15:46:18.429581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:56.945 [2024-07-24 15:46:18.429591] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:56.945 [2024-07-24 15:46:18.429601] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:56.945 [2024-07-24 15:46:18.429611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:56.945 [2024-07-24 15:46:18.429621] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:56.945 [2024-07-24 15:46:18.429631] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:56.945 [2024-07-24 15:46:18.429641] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:56.945 [2024-07-24 15:46:18.429651] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:56.945 [2024-07-24 15:46:18.429661] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:56.945 [2024-07-24 15:46:18.429671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:56.945 [2024-07-24 15:46:18.429682] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:56.945 [2024-07-24 15:46:18.429692] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:56.945 [2024-07-24 15:46:18.429702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:56.945 [2024-07-24 15:46:18.429712] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:56.945 [2024-07-24 15:46:18.429723] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:56.945 [2024-07-24 15:46:18.429737] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:56.945 [2024-07-24 15:46:18.429759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:56.945 [2024-07-24 15:46:18.429774] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:56.945 [2024-07-24 15:46:18.429787] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:56.945 [2024-07-24 15:46:18.429797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:56.945 [2024-07-24 15:46:18.429808] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:56.945 [2024-07-24 15:46:18.429818] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:56.945 [2024-07-24 15:46:18.429829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:56.945 [2024-07-24 15:46:18.429840] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:56.945 [2024-07-24 15:46:18.429855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:56.945 [2024-07-24 15:46:18.429867] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:56.945 [2024-07-24 15:46:18.429878] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:56.945 [2024-07-24 15:46:18.429890] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:56.945 [2024-07-24 15:46:18.429901] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:56.945 [2024-07-24 15:46:18.429912] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:56.945 [2024-07-24 15:46:18.429923] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:56.945 [2024-07-24 15:46:18.429934] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:56.945 [2024-07-24 15:46:18.429945] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:56.945 [2024-07-24 15:46:18.429956] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:56.945 [2024-07-24 15:46:18.429967] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:56.945 [2024-07-24 15:46:18.429978] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:56.945 [2024-07-24 15:46:18.429990] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:56.945 [2024-07-24 15:46:18.430001] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:56.945 [2024-07-24 15:46:18.430012] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:56.945 [2024-07-24 15:46:18.430024] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:56.945 [2024-07-24 15:46:18.430036] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:56.945 [2024-07-24 15:46:18.430048] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:56.945 [2024-07-24 15:46:18.430059] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:56.945 [2024-07-24 15:46:18.430070] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:20:56.945 [2024-07-24 15:46:18.430082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.945 [2024-07-24 15:46:18.430118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:56.945 [2024-07-24 15:46:18.430138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.883 ms 00:20:56.945 [2024-07-24 15:46:18.430152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.945 [2024-07-24 15:46:18.448630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.945 [2024-07-24 15:46:18.448703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:56.945 [2024-07-24 15:46:18.448724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.418 ms 00:20:56.945 [2024-07-24 15:46:18.448737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.945 [2024-07-24 15:46:18.448864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.945 [2024-07-24 15:46:18.448889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:56.945 [2024-07-24 15:46:18.448903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:56.945 [2024-07-24 15:46:18.448914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.945 [2024-07-24 15:46:18.517736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.945 [2024-07-24 15:46:18.517809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:56.945 [2024-07-24 15:46:18.517834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.729 ms 00:20:56.945 [2024-07-24 15:46:18.517855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.945 [2024-07-24 15:46:18.517941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.945 [2024-07-24 15:46:18.517960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:56.945 [2024-07-24 15:46:18.517975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:56.945 [2024-07-24 15:46:18.517989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.945 [2024-07-24 15:46:18.518478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.945 [2024-07-24 15:46:18.518512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:56.945 [2024-07-24 15:46:18.518529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:20:56.945 [2024-07-24 15:46:18.518543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.945 [2024-07-24 15:46:18.518728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.945 [2024-07-24 15:46:18.518751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:56.945 [2024-07-24 15:46:18.518766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:20:56.945 [2024-07-24 15:46:18.518779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.945 [2024-07-24 15:46:18.539843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.945 [2024-07-24 15:46:18.539920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:56.945 [2024-07-24 15:46:18.539945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.028 ms 00:20:56.945 [2024-07-24 
15:46:18.539959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.204 [2024-07-24 15:46:18.560806] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:57.204 [2024-07-24 15:46:18.560908] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:57.204 [2024-07-24 15:46:18.560949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.204 [2024-07-24 15:46:18.560975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:57.204 [2024-07-24 15:46:18.561001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.726 ms 00:20:57.204 [2024-07-24 15:46:18.561030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.204 [2024-07-24 15:46:18.598920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.204 [2024-07-24 15:46:18.599017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:57.204 [2024-07-24 15:46:18.599045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.726 ms 00:20:57.204 [2024-07-24 15:46:18.599061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.204 [2024-07-24 15:46:18.619216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.204 [2024-07-24 15:46:18.619310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:57.204 [2024-07-24 15:46:18.619336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.991 ms 00:20:57.204 [2024-07-24 15:46:18.619350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.204 [2024-07-24 15:46:18.639517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.204 [2024-07-24 15:46:18.639609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:57.204 [2024-07-24 15:46:18.639632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.071 ms 00:20:57.204 [2024-07-24 15:46:18.639646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.204 [2024-07-24 15:46:18.640363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.204 [2024-07-24 15:46:18.640420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:57.204 [2024-07-24 15:46:18.640445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:20:57.204 [2024-07-24 15:46:18.640459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.204 [2024-07-24 15:46:18.734900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.204 [2024-07-24 15:46:18.734984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:57.204 [2024-07-24 15:46:18.735009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.403 ms 00:20:57.204 [2024-07-24 15:46:18.735025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.204 [2024-07-24 15:46:18.754359] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:57.204 [2024-07-24 15:46:18.758311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.204 [2024-07-24 15:46:18.758385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:57.204 [2024-07-24 15:46:18.758419] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.129 ms 00:20:57.204 [2024-07-24 15:46:18.758442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.204 [2024-07-24 15:46:18.758627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.204 [2024-07-24 15:46:18.758661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:57.204 [2024-07-24 15:46:18.758686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:57.204 [2024-07-24 15:46:18.758707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.204 [2024-07-24 15:46:18.758844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.204 [2024-07-24 15:46:18.758875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:57.204 [2024-07-24 15:46:18.758918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:57.204 [2024-07-24 15:46:18.758938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.204 [2024-07-24 15:46:18.761857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.204 [2024-07-24 15:46:18.761935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:57.204 [2024-07-24 15:46:18.761966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.869 ms 00:20:57.204 [2024-07-24 15:46:18.761987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.204 [2024-07-24 15:46:18.762051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.204 [2024-07-24 15:46:18.762077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:57.204 [2024-07-24 15:46:18.762146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:57.204 [2024-07-24 15:46:18.762170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.204 [2024-07-24 15:46:18.762242] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:57.204 [2024-07-24 15:46:18.762271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.204 [2024-07-24 15:46:18.762293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:57.204 [2024-07-24 15:46:18.762321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:57.204 [2024-07-24 15:46:18.762344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.462 [2024-07-24 15:46:18.814644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.462 [2024-07-24 15:46:18.814730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:57.462 [2024-07-24 15:46:18.814756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.247 ms 00:20:57.462 [2024-07-24 15:46:18.814771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.462 [2024-07-24 15:46:18.814929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.462 [2024-07-24 15:46:18.814954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:57.462 [2024-07-24 15:46:18.814969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:57.462 [2024-07-24 15:46:18.814983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.462 [2024-07-24 15:46:18.816753] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 422.860 ms, result 0 00:21:37.090  Copying: 1024/1024 [MB] (average 26 MBps) [2024-07-24 15:46:58.576382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.090 [2024-07-24 15:46:58.576496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:37.090 [2024-07-24 15:46:58.576540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:37.090 [2024-07-24 15:46:58.576563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.090 [2024-07-24 15:46:58.576617] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:37.090 [2024-07-24 15:46:58.583763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.090 [2024-07-24 15:46:58.583956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:37.090 [2024-07-24 15:46:58.584126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.106 ms 00:21:37.090 [2024-07-24 15:46:58.584314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.090 [2024-07-24 15:46:58.584738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.090 [2024-07-24 15:46:58.584909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:37.090 [2024-07-24 15:46:58.585057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:21:37.090 [2024-07-24 15:46:58.585148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.090 [2024-07-24 15:46:58.590464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.090 [2024-07-24 15:46:58.590656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:37.090 [2024-07-24 15:46:58.590792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.164 ms 00:21:37.090 [2024-07-24 15:46:58.590984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.090 [2024-07-24 15:46:58.599499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.090 [2024-07-24 15:46:58.599682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]
name: Finish L2P unmaps 00:21:37.090 [2024-07-24 15:46:58.599816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.427 ms 00:21:37.090 [2024-07-24 15:46:58.599960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.090 [2024-07-24 15:46:58.638890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.090 [2024-07-24 15:46:58.639112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:37.090 [2024-07-24 15:46:58.639256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.740 ms 00:21:37.090 [2024-07-24 15:46:58.639317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.090 [2024-07-24 15:46:58.660514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.090 [2024-07-24 15:46:58.660721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:37.090 [2024-07-24 15:46:58.660864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.089 ms 00:21:37.090 [2024-07-24 15:46:58.660997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.090 [2024-07-24 15:46:58.661418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.090 [2024-07-24 15:46:58.661586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:37.090 [2024-07-24 15:46:58.661732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:21:37.090 [2024-07-24 15:46:58.661878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.349 [2024-07-24 15:46:58.702453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.349 [2024-07-24 15:46:58.702681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:37.349 [2024-07-24 15:46:58.702816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.425 ms 00:21:37.349 [2024-07-24 15:46:58.702962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.349 [2024-07-24 15:46:58.741824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.349 [2024-07-24 15:46:58.742047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:37.349 [2024-07-24 15:46:58.742221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.768 ms 00:21:37.349 [2024-07-24 15:46:58.742360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.349 [2024-07-24 15:46:58.780344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.350 [2024-07-24 15:46:58.780547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:37.350 [2024-07-24 15:46:58.780682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.900 ms 00:21:37.350 [2024-07-24 15:46:58.780711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.350 [2024-07-24 15:46:58.819064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.350 [2024-07-24 15:46:58.819162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:37.350 [2024-07-24 15:46:58.819187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.236 ms 00:21:37.350 [2024-07-24 15:46:58.819200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.350 [2024-07-24 15:46:58.819237] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:37.350 
[2024-07-24 15:46:58.819262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 
00:21:37.350 [2024-07-24 15:46:58.819611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 
wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.819989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:37.350 [2024-07-24 15:46:58.820400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820716] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:37.351 [2024-07-24 15:46:58.820741] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:37.351 [2024-07-24 15:46:58.820764] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2296fe34-9ba1-4bd9-945f-27e23b547ff8 00:21:37.351 [2024-07-24 15:46:58.820778] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:37.351 [2024-07-24 15:46:58.820791] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:37.351 [2024-07-24 15:46:58.820803] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:37.351 [2024-07-24 15:46:58.820817] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:37.351 [2024-07-24 15:46:58.820829] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:37.351 [2024-07-24 15:46:58.820843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:37.351 [2024-07-24 15:46:58.820856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:37.351 [2024-07-24 15:46:58.820868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:37.351 [2024-07-24 15:46:58.820879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:37.351 [2024-07-24 15:46:58.820893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.351 [2024-07-24 15:46:58.820906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:37.351 [2024-07-24 15:46:58.820920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.657 ms 00:21:37.351 [2024-07-24 15:46:58.820948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.351 [2024-07-24 15:46:58.841740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.351 [2024-07-24 15:46:58.841798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:37.351 [2024-07-24 15:46:58.841820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.711 ms 00:21:37.351 [2024-07-24 15:46:58.841834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.351 [2024-07-24 15:46:58.842166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.351 [2024-07-24 15:46:58.842194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:37.351 [2024-07-24 15:46:58.842218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:21:37.351 [2024-07-24 15:46:58.842232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.351 [2024-07-24 15:46:58.899364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.351 [2024-07-24 15:46:58.899431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:37.351 [2024-07-24 15:46:58.899457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.351 [2024-07-24 15:46:58.899481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.351 [2024-07-24 15:46:58.899573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.351 [2024-07-24 15:46:58.899590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:37.351 [2024-07-24 15:46:58.899612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.351 [2024-07-24 15:46:58.899625] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.351 [2024-07-24 15:46:58.899760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.351 [2024-07-24 15:46:58.899783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:37.351 [2024-07-24 15:46:58.899797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.351 [2024-07-24 15:46:58.899811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.351 [2024-07-24 15:46:58.899837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.351 [2024-07-24 15:46:58.899852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:37.351 [2024-07-24 15:46:58.899865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.351 [2024-07-24 15:46:58.899885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.609 [2024-07-24 15:46:59.016481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.609 [2024-07-24 15:46:59.016549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:37.609 [2024-07-24 15:46:59.016568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.609 [2024-07-24 15:46:59.016580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.609 [2024-07-24 15:46:59.056051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.609 [2024-07-24 15:46:59.056130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:37.609 [2024-07-24 15:46:59.056149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.609 [2024-07-24 15:46:59.056172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.609 [2024-07-24 15:46:59.056274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.609 [2024-07-24 15:46:59.056293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:37.609 [2024-07-24 15:46:59.056305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.609 [2024-07-24 15:46:59.056315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.609 [2024-07-24 15:46:59.056370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.609 [2024-07-24 15:46:59.056386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:37.609 [2024-07-24 15:46:59.056397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.609 [2024-07-24 15:46:59.056408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.609 [2024-07-24 15:46:59.056528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.609 [2024-07-24 15:46:59.056547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:37.609 [2024-07-24 15:46:59.056559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.609 [2024-07-24 15:46:59.056570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.609 [2024-07-24 15:46:59.056616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.609 [2024-07-24 15:46:59.056633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:37.609 [2024-07-24 15:46:59.056644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:21:37.609 [2024-07-24 15:46:59.056655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.609 [2024-07-24 15:46:59.056701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.609 [2024-07-24 15:46:59.056716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:37.609 [2024-07-24 15:46:59.056728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.609 [2024-07-24 15:46:59.056738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.609 [2024-07-24 15:46:59.056788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.609 [2024-07-24 15:46:59.056804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:37.610 [2024-07-24 15:46:59.056815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.610 [2024-07-24 15:46:59.056825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.610 [2024-07-24 15:46:59.056963] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 480.570 ms, result 0 00:21:38.546 00:21:38.546 00:21:38.804 15:47:00 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:41.331 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:41.331 15:47:02 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:41.331 [2024-07-24 15:47:02.501179] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:41.331 [2024-07-24 15:47:02.501346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74852 ] 00:21:41.331 [2024-07-24 15:47:02.670569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.331 [2024-07-24 15:47:02.891606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.897 [2024-07-24 15:47:03.205775] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:41.897 [2024-07-24 15:47:03.205853] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:41.897 [2024-07-24 15:47:03.359137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.897 [2024-07-24 15:47:03.359209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:41.897 [2024-07-24 15:47:03.359229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:41.897 [2024-07-24 15:47:03.359242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.897 [2024-07-24 15:47:03.359313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.897 [2024-07-24 15:47:03.359332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:41.897 [2024-07-24 15:47:03.359345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:21:41.897 [2024-07-24 15:47:03.359356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.897 [2024-07-24 15:47:03.359387] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 
00:21:41.897 [2024-07-24 15:47:03.360337] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:41.897 [2024-07-24 15:47:03.360382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.897 [2024-07-24 15:47:03.360397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:41.897 [2024-07-24 15:47:03.360409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:21:41.897 [2024-07-24 15:47:03.360420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.897 [2024-07-24 15:47:03.361561] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:41.897 [2024-07-24 15:47:03.377763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.897 [2024-07-24 15:47:03.377811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:41.897 [2024-07-24 15:47:03.377837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.203 ms 00:21:41.897 [2024-07-24 15:47:03.377848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.897 [2024-07-24 15:47:03.377920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.897 [2024-07-24 15:47:03.377939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:41.897 [2024-07-24 15:47:03.377952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:41.898 [2024-07-24 15:47:03.377963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.898 [2024-07-24 15:47:03.382408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.898 [2024-07-24 15:47:03.382456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:41.898 [2024-07-24 15:47:03.382473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.351 ms 00:21:41.898 [2024-07-24 15:47:03.382484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.898 [2024-07-24 15:47:03.382598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.898 [2024-07-24 15:47:03.382622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:41.898 [2024-07-24 15:47:03.382634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:21:41.898 [2024-07-24 15:47:03.382645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.898 [2024-07-24 15:47:03.382705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.898 [2024-07-24 15:47:03.382728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:41.898 [2024-07-24 15:47:03.382740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:41.898 [2024-07-24 15:47:03.382751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.898 [2024-07-24 15:47:03.382790] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:41.898 [2024-07-24 15:47:03.387015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.898 [2024-07-24 15:47:03.387055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:41.898 [2024-07-24 15:47:03.387071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.240 ms 00:21:41.898 [2024-07-24 15:47:03.387082] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:41.898 [2024-07-24 15:47:03.387143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.898 [2024-07-24 15:47:03.387159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:41.898 [2024-07-24 15:47:03.387171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:41.898 [2024-07-24 15:47:03.387182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.898 [2024-07-24 15:47:03.387228] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:41.898 [2024-07-24 15:47:03.387262] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:41.898 [2024-07-24 15:47:03.387303] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:41.898 [2024-07-24 15:47:03.387323] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:41.898 [2024-07-24 15:47:03.387402] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:41.898 [2024-07-24 15:47:03.387417] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:41.898 [2024-07-24 15:47:03.387432] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:41.898 [2024-07-24 15:47:03.387446] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:41.898 [2024-07-24 15:47:03.387459] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:41.898 [2024-07-24 15:47:03.387476] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:41.898 [2024-07-24 15:47:03.387487] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:41.898 [2024-07-24 15:47:03.387497] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:41.898 [2024-07-24 15:47:03.387508] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:41.898 [2024-07-24 15:47:03.387519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.898 [2024-07-24 15:47:03.387530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:41.898 [2024-07-24 15:47:03.387542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:21:41.898 [2024-07-24 15:47:03.387553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.898 [2024-07-24 15:47:03.387625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.898 [2024-07-24 15:47:03.387639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:41.898 [2024-07-24 15:47:03.387654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:41.898 [2024-07-24 15:47:03.387665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.898 [2024-07-24 15:47:03.387754] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:41.898 [2024-07-24 15:47:03.387770] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:41.898 [2024-07-24 15:47:03.387782] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:41.898 [2024-07-24 
15:47:03.387793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.898 [2024-07-24 15:47:03.387805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:41.898 [2024-07-24 15:47:03.387815] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:41.898 [2024-07-24 15:47:03.387825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:41.898 [2024-07-24 15:47:03.387837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:41.898 [2024-07-24 15:47:03.387848] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:41.898 [2024-07-24 15:47:03.387858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:41.898 [2024-07-24 15:47:03.387868] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:41.898 [2024-07-24 15:47:03.387878] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:41.898 [2024-07-24 15:47:03.387888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:41.898 [2024-07-24 15:47:03.387899] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:41.898 [2024-07-24 15:47:03.387909] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:41.898 [2024-07-24 15:47:03.387920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.898 [2024-07-24 15:47:03.387930] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:41.898 [2024-07-24 15:47:03.387940] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:41.898 [2024-07-24 15:47:03.387950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.898 [2024-07-24 15:47:03.387959] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:41.898 [2024-07-24 15:47:03.387970] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:41.898 [2024-07-24 15:47:03.387994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:41.898 [2024-07-24 15:47:03.388004] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:41.898 [2024-07-24 15:47:03.388014] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:41.898 [2024-07-24 15:47:03.388024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:41.898 [2024-07-24 15:47:03.388034] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:41.898 [2024-07-24 15:47:03.388044] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:41.898 [2024-07-24 15:47:03.388054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:41.898 [2024-07-24 15:47:03.388064] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:41.898 [2024-07-24 15:47:03.388074] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:41.898 [2024-07-24 15:47:03.388292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:41.898 [2024-07-24 15:47:03.388356] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:41.898 [2024-07-24 15:47:03.388396] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:41.898 [2024-07-24 15:47:03.388431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:41.898 [2024-07-24 15:47:03.388547] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 
00:21:41.898 [2024-07-24 15:47:03.388597] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:41.898 [2024-07-24 15:47:03.388634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:41.898 [2024-07-24 15:47:03.388670] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:41.898 [2024-07-24 15:47:03.388805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:41.898 [2024-07-24 15:47:03.388842] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:41.898 [2024-07-24 15:47:03.388944] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:41.898 [2024-07-24 15:47:03.388994] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:41.898 [2024-07-24 15:47:03.389112] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:41.898 [2024-07-24 15:47:03.389175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.898 [2024-07-24 15:47:03.389309] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:41.898 [2024-07-24 15:47:03.389361] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:41.898 [2024-07-24 15:47:03.389398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:41.898 [2024-07-24 15:47:03.389497] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:41.898 [2024-07-24 15:47:03.389545] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:41.898 [2024-07-24 15:47:03.389582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:41.898 [2024-07-24 15:47:03.389681] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:41.898 [2024-07-24 15:47:03.389706] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:41.898 [2024-07-24 15:47:03.389720] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:41.898 [2024-07-24 15:47:03.389731] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:41.898 [2024-07-24 15:47:03.389742] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:41.898 [2024-07-24 15:47:03.389754] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:41.898 [2024-07-24 15:47:03.389765] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:41.898 [2024-07-24 15:47:03.389776] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:41.899 [2024-07-24 15:47:03.389787] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:41.899 [2024-07-24 15:47:03.389798] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:41.899 [2024-07-24 15:47:03.389809] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 
blk_offs:0x6160 blk_sz:0x40 00:21:41.899 [2024-07-24 15:47:03.389820] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:41.899 [2024-07-24 15:47:03.389832] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:41.899 [2024-07-24 15:47:03.389843] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:41.899 [2024-07-24 15:47:03.389855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:41.899 [2024-07-24 15:47:03.389866] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:41.899 [2024-07-24 15:47:03.389878] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:41.899 [2024-07-24 15:47:03.389890] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:41.899 [2024-07-24 15:47:03.389902] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:41.899 [2024-07-24 15:47:03.389913] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:41.899 [2024-07-24 15:47:03.389924] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:41.899 [2024-07-24 15:47:03.389938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.899 [2024-07-24 15:47:03.389949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:41.899 [2024-07-24 15:47:03.389962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.229 ms 00:21:41.899 [2024-07-24 15:47:03.389973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.899 [2024-07-24 15:47:03.408130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.899 [2024-07-24 15:47:03.408292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:41.899 [2024-07-24 15:47:03.408434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.065 ms 00:21:41.899 [2024-07-24 15:47:03.408486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.899 [2024-07-24 15:47:03.408690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.899 [2024-07-24 15:47:03.408858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:41.899 [2024-07-24 15:47:03.408980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:41.899 [2024-07-24 15:47:03.409099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.899 [2024-07-24 15:47:03.459433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.899 [2024-07-24 15:47:03.459629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:41.899 [2024-07-24 15:47:03.459772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.200 ms 00:21:41.899 [2024-07-24 15:47:03.459900] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:41.899 [2024-07-24 15:47:03.460015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.899 [2024-07-24 15:47:03.460139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:41.899 [2024-07-24 15:47:03.460244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:41.899 [2024-07-24 15:47:03.460293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.899 [2024-07-24 15:47:03.460785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.899 [2024-07-24 15:47:03.460927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:41.899 [2024-07-24 15:47:03.461035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:21:41.899 [2024-07-24 15:47:03.461166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.899 [2024-07-24 15:47:03.461369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.899 [2024-07-24 15:47:03.461437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:41.899 [2024-07-24 15:47:03.461542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:21:41.899 [2024-07-24 15:47:03.461591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.899 [2024-07-24 15:47:03.478424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.899 [2024-07-24 15:47:03.478594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:41.899 [2024-07-24 15:47:03.478714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.772 ms 00:21:41.899 [2024-07-24 15:47:03.478771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.158 [2024-07-24 15:47:03.495755] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:42.158 [2024-07-24 15:47:03.496014] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:42.158 [2024-07-24 15:47:03.496242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.158 [2024-07-24 15:47:03.496349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:42.158 [2024-07-24 15:47:03.496400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.270 ms 00:21:42.158 [2024-07-24 15:47:03.496501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.158 [2024-07-24 15:47:03.526462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.158 [2024-07-24 15:47:03.526663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:42.158 [2024-07-24 15:47:03.526784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.859 ms 00:21:42.158 [2024-07-24 15:47:03.526834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.158 [2024-07-24 15:47:03.542918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.158 [2024-07-24 15:47:03.543082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:42.158 [2024-07-24 15:47:03.543212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.873 ms 00:21:42.158 [2024-07-24 15:47:03.543262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.158 [2024-07-24 
15:47:03.559436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.158 [2024-07-24 15:47:03.559613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:42.158 [2024-07-24 15:47:03.559745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.985 ms 00:21:42.158 [2024-07-24 15:47:03.559874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.158 [2024-07-24 15:47:03.560411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.158 [2024-07-24 15:47:03.560444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:42.158 [2024-07-24 15:47:03.560460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:21:42.158 [2024-07-24 15:47:03.560472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.158 [2024-07-24 15:47:03.637173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.158 [2024-07-24 15:47:03.637248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:42.158 [2024-07-24 15:47:03.637277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.675 ms 00:21:42.158 [2024-07-24 15:47:03.637289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.158 [2024-07-24 15:47:03.650185] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:42.158 [2024-07-24 15:47:03.652915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.158 [2024-07-24 15:47:03.652948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:42.158 [2024-07-24 15:47:03.652967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.550 ms 00:21:42.158 [2024-07-24 15:47:03.652978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.158 [2024-07-24 15:47:03.653118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.158 [2024-07-24 15:47:03.653142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:42.158 [2024-07-24 15:47:03.653155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:42.158 [2024-07-24 15:47:03.653167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.158 [2024-07-24 15:47:03.653255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.158 [2024-07-24 15:47:03.653274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:42.158 [2024-07-24 15:47:03.653286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:42.158 [2024-07-24 15:47:03.653296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.158 [2024-07-24 15:47:03.655153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.158 [2024-07-24 15:47:03.655186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:42.158 [2024-07-24 15:47:03.655204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.828 ms 00:21:42.158 [2024-07-24 15:47:03.655214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.158 [2024-07-24 15:47:03.655251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.158 [2024-07-24 15:47:03.655266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:42.158 [2024-07-24 15:47:03.655288] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:42.158 [2024-07-24 15:47:03.655305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.158 [2024-07-24 15:47:03.655346] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:42.158 [2024-07-24 15:47:03.655362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.158 [2024-07-24 15:47:03.655373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:42.158 [2024-07-24 15:47:03.655384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:42.158 [2024-07-24 15:47:03.655398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.158 [2024-07-24 15:47:03.686776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.158 [2024-07-24 15:47:03.686828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:42.158 [2024-07-24 15:47:03.686847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.349 ms 00:21:42.158 [2024-07-24 15:47:03.686869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.158 [2024-07-24 15:47:03.686962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.158 [2024-07-24 15:47:03.686988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:42.158 [2024-07-24 15:47:03.687001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:42.158 [2024-07-24 15:47:03.687012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.158 [2024-07-24 15:47:03.688148] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 328.525 ms, result 0 00:22:20.694  Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-24 15:47:42.159470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.694 [2024-07-24 15:47:42.159588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:20.694 [2024-07-24 15:47:42.159614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:20.694 [2024-07-24 15:47:42.159641] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.694 [2024-07-24 15:47:42.161263] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:20.694 [2024-07-24 15:47:42.169754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.694 [2024-07-24 15:47:42.169814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:20.694 [2024-07-24 15:47:42.169832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.446 ms 00:22:20.694 [2024-07-24 15:47:42.169843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.694 [2024-07-24 15:47:42.180427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.694 [2024-07-24 15:47:42.180473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:20.694 [2024-07-24 15:47:42.180491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.256 ms 00:22:20.694 [2024-07-24 15:47:42.180502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.694 [2024-07-24 15:47:42.201507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.694 [2024-07-24 15:47:42.201551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:20.694 [2024-07-24 15:47:42.201568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.974 ms 00:22:20.694 [2024-07-24 15:47:42.201579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.694 [2024-07-24 15:47:42.208319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.694 [2024-07-24 15:47:42.208367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:20.694 [2024-07-24 15:47:42.208381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.700 ms 00:22:20.694 [2024-07-24 15:47:42.208392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.694 [2024-07-24 15:47:42.239890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.694 [2024-07-24 15:47:42.239946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:20.694 [2024-07-24 15:47:42.239963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.439 ms 00:22:20.694 [2024-07-24 15:47:42.239974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.694 [2024-07-24 15:47:42.257776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.695 [2024-07-24 15:47:42.257826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:20.695 [2024-07-24 15:47:42.257843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.759 ms 00:22:20.695 [2024-07-24 15:47:42.257854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.979 [2024-07-24 15:47:42.342843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.979 [2024-07-24 15:47:42.342938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:20.979 [2024-07-24 15:47:42.342959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.936 ms 00:22:20.979 [2024-07-24 15:47:42.342971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.979 [2024-07-24 15:47:42.375039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.979 [2024-07-24 15:47:42.375105] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:20.979 [2024-07-24 15:47:42.375125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.045 ms 00:22:20.979 [2024-07-24 15:47:42.375136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.979 [2024-07-24 15:47:42.406388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.979 [2024-07-24 15:47:42.406432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:20.979 [2024-07-24 15:47:42.406448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.206 ms 00:22:20.979 [2024-07-24 15:47:42.406460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.979 [2024-07-24 15:47:42.437388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.979 [2024-07-24 15:47:42.437438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:20.979 [2024-07-24 15:47:42.437456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.885 ms 00:22:20.979 [2024-07-24 15:47:42.437467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.979 [2024-07-24 15:47:42.468031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.979 [2024-07-24 15:47:42.468076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:20.979 [2024-07-24 15:47:42.468103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.453 ms 00:22:20.979 [2024-07-24 15:47:42.468115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.979 [2024-07-24 15:47:42.468158] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:20.979 [2024-07-24 15:47:42.468182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 119040 / 261120 wr_cnt: 1 state: open 00:22:20.979 [2024-07-24 15:47:42.468197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468619] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:20.979 [2024-07-24 15:47:42.468699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 
15:47:42.468903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.468996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:22:20.980 [2024-07-24 15:47:42.469202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:20.980 [2024-07-24 15:47:42.469367] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:20.980 [2024-07-24 15:47:42.469378] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2296fe34-9ba1-4bd9-945f-27e23b547ff8 00:22:20.980 [2024-07-24 15:47:42.469390] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 119040 00:22:20.980 [2024-07-24 15:47:42.469400] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 120000 00:22:20.980 [2024-07-24 15:47:42.469410] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 119040 00:22:20.980 [2024-07-24 15:47:42.469422] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:22:20.980 [2024-07-24 15:47:42.469432] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:20.980 [2024-07-24 15:47:42.469448] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:20.980 [2024-07-24 15:47:42.469459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:20.980 [2024-07-24 15:47:42.469469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:20.980 [2024-07-24 15:47:42.469479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:20.980 [2024-07-24 15:47:42.469490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.980 [2024-07-24 15:47:42.469501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:20.980 [2024-07-24 15:47:42.469512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.333 ms 00:22:20.981 [2024-07-24 15:47:42.469536] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:20.981 [2024-07-24 15:47:42.489655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.981 [2024-07-24 15:47:42.489738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:20.981 [2024-07-24 15:47:42.489770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.057 ms 00:22:20.981 [2024-07-24 15:47:42.489807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.981 [2024-07-24 15:47:42.490210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.981 [2024-07-24 15:47:42.490253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:20.981 [2024-07-24 15:47:42.490277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:22:20.981 [2024-07-24 15:47:42.490297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.981 [2024-07-24 15:47:42.536047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.981 [2024-07-24 15:47:42.536115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:20.981 [2024-07-24 15:47:42.536140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.981 [2024-07-24 15:47:42.536152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.981 [2024-07-24 15:47:42.536226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.981 [2024-07-24 15:47:42.536240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:20.981 [2024-07-24 15:47:42.536259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.981 [2024-07-24 15:47:42.536270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.981 [2024-07-24 15:47:42.536376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.981 [2024-07-24 15:47:42.536395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:20.981 [2024-07-24 15:47:42.536407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.981 [2024-07-24 15:47:42.536424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.981 [2024-07-24 15:47:42.536447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.981 [2024-07-24 15:47:42.536460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:20.981 [2024-07-24 15:47:42.536471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.981 [2024-07-24 15:47:42.536481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.239 [2024-07-24 15:47:42.638348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.239 [2024-07-24 15:47:42.638403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:21.239 [2024-07-24 15:47:42.638429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.239 [2024-07-24 15:47:42.638441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.239 [2024-07-24 15:47:42.677188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.239 [2024-07-24 15:47:42.677238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:21.239 [2024-07-24 15:47:42.677255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:22:21.239 [2024-07-24 15:47:42.677266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.239 [2024-07-24 15:47:42.677361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.239 [2024-07-24 15:47:42.677379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:21.239 [2024-07-24 15:47:42.677390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.239 [2024-07-24 15:47:42.677401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.239 [2024-07-24 15:47:42.677465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.239 [2024-07-24 15:47:42.677482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:21.239 [2024-07-24 15:47:42.677494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.239 [2024-07-24 15:47:42.677504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.239 [2024-07-24 15:47:42.677633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.239 [2024-07-24 15:47:42.677654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:21.239 [2024-07-24 15:47:42.677666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.239 [2024-07-24 15:47:42.677677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.239 [2024-07-24 15:47:42.677734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.239 [2024-07-24 15:47:42.677752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:21.239 [2024-07-24 15:47:42.677763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.239 [2024-07-24 15:47:42.677773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.239 [2024-07-24 15:47:42.677815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.239 [2024-07-24 15:47:42.677829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:21.239 [2024-07-24 15:47:42.677840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.239 [2024-07-24 15:47:42.677851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.239 [2024-07-24 15:47:42.677905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.239 [2024-07-24 15:47:42.677922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:21.239 [2024-07-24 15:47:42.677933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.239 [2024-07-24 15:47:42.677944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.239 [2024-07-24 15:47:42.678082] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 520.306 ms, result 0 00:22:23.140 00:22:23.140 00:22:23.140 15:47:44 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:23.140 [2024-07-24 15:47:44.320637] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
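The derived figures in the FTL dumps above are internally consistent and can be recomputed by hand. A minimal sketch follows (Python, not part of the test output; the 4 KiB logical block size is an assumption inferred from the superblock dump, where the l2p region's blk_sz:0x5000 is reported as 80.00 MiB):

    # Recompute figures reported in the FTL log above (hypothetical helper, not SPDK code).
    BLOCK = 4096                       # assumed FTL logical block size in bytes

    # Superblock region l2p: type:0x2 blk_offs:0x20 blk_sz:0x5000
    print(0x20 * BLOCK / 2**20)        # 0.125  -> logged as "offset: 0.12 MiB"
    print(0x5000 * BLOCK / 2**20)      # 80.0   -> logged as "blocks: 80.00 MiB"

    # Write amplification from ftl_dev_dump_stats: total writes / user writes
    print(round(120000 / 119040, 4))   # 1.0081 -> logged as "WAF: 1.0081"

    # spdk_dd --skip=131072 --count=262144 on the same 4 KiB blocks
    print(262144 * BLOCK / 2**20)      # 1024.0 -> matches "Copying: 1024/1024 [MB]"
    print(131072 * BLOCK / 2**20)      # 512.0  -> read offset into ftl0, in MiB

Under that block-size assumption, the restore pass reads 1 GiB starting 512 MiB into the ftl0 bdev, which agrees with the copy progress reported by spdk_dd above.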
00:22:23.140 [2024-07-24 15:47:44.320836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75274 ] 00:22:23.140 [2024-07-24 15:47:44.493951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.140 [2024-07-24 15:47:44.721743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.708 [2024-07-24 15:47:45.032318] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:23.708 [2024-07-24 15:47:45.032413] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:23.708 [2024-07-24 15:47:45.187300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.708 [2024-07-24 15:47:45.187368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:23.708 [2024-07-24 15:47:45.187390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:23.708 [2024-07-24 15:47:45.187403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.708 [2024-07-24 15:47:45.187472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.708 [2024-07-24 15:47:45.187491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:23.708 [2024-07-24 15:47:45.187504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:23.708 [2024-07-24 15:47:45.187516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.708 [2024-07-24 15:47:45.187547] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:23.708 [2024-07-24 15:47:45.188485] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:23.708 [2024-07-24 15:47:45.188525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.708 [2024-07-24 15:47:45.188540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:23.708 [2024-07-24 15:47:45.188553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:22:23.708 [2024-07-24 15:47:45.188565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.708 [2024-07-24 15:47:45.189727] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:23.709 [2024-07-24 15:47:45.205965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.709 [2024-07-24 15:47:45.206010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:23.709 [2024-07-24 15:47:45.206035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.240 ms 00:22:23.709 [2024-07-24 15:47:45.206048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.709 [2024-07-24 15:47:45.206142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.709 [2024-07-24 15:47:45.206163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:23.709 [2024-07-24 15:47:45.206177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:23.709 [2024-07-24 15:47:45.206188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.709 [2024-07-24 15:47:45.210689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.709 [2024-07-24 
15:47:45.210739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:23.709 [2024-07-24 15:47:45.210756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.386 ms 00:22:23.709 [2024-07-24 15:47:45.210768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.709 [2024-07-24 15:47:45.210924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.709 [2024-07-24 15:47:45.210947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:23.709 [2024-07-24 15:47:45.210961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:22:23.709 [2024-07-24 15:47:45.210972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.709 [2024-07-24 15:47:45.211035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.709 [2024-07-24 15:47:45.211058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:23.709 [2024-07-24 15:47:45.211071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:23.709 [2024-07-24 15:47:45.211083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.709 [2024-07-24 15:47:45.211141] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:23.709 [2024-07-24 15:47:45.215408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.709 [2024-07-24 15:47:45.215445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:23.709 [2024-07-24 15:47:45.215461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.280 ms 00:22:23.709 [2024-07-24 15:47:45.215474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.709 [2024-07-24 15:47:45.215523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.709 [2024-07-24 15:47:45.215540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:23.709 [2024-07-24 15:47:45.215553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:23.709 [2024-07-24 15:47:45.215564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.709 [2024-07-24 15:47:45.215607] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:23.709 [2024-07-24 15:47:45.215640] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:22:23.709 [2024-07-24 15:47:45.215684] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:23.709 [2024-07-24 15:47:45.215705] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:22:23.709 [2024-07-24 15:47:45.215800] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:23.709 [2024-07-24 15:47:45.215818] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:23.709 [2024-07-24 15:47:45.215832] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:23.709 [2024-07-24 15:47:45.215847] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:23.709 [2024-07-24 15:47:45.215872] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:23.709 [2024-07-24 15:47:45.215890] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:23.709 [2024-07-24 15:47:45.215902] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:23.709 [2024-07-24 15:47:45.215913] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:23.709 [2024-07-24 15:47:45.215924] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:23.709 [2024-07-24 15:47:45.215936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.709 [2024-07-24 15:47:45.215948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:23.709 [2024-07-24 15:47:45.215960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:22:23.709 [2024-07-24 15:47:45.215972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.709 [2024-07-24 15:47:45.216044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.709 [2024-07-24 15:47:45.216058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:23.709 [2024-07-24 15:47:45.216074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:23.709 [2024-07-24 15:47:45.216112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.709 [2024-07-24 15:47:45.216232] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:23.709 [2024-07-24 15:47:45.216251] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:23.709 [2024-07-24 15:47:45.216264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:23.709 [2024-07-24 15:47:45.216276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:23.709 [2024-07-24 15:47:45.216288] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:23.709 [2024-07-24 15:47:45.216300] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:23.709 [2024-07-24 15:47:45.216311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:23.709 [2024-07-24 15:47:45.216322] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:23.709 [2024-07-24 15:47:45.216334] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:23.709 [2024-07-24 15:47:45.216345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:23.709 [2024-07-24 15:47:45.216356] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:23.709 [2024-07-24 15:47:45.216367] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:23.709 [2024-07-24 15:47:45.216379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:23.709 [2024-07-24 15:47:45.216390] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:23.709 [2024-07-24 15:47:45.216401] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:23.709 [2024-07-24 15:47:45.216412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:23.709 [2024-07-24 15:47:45.216423] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:23.709 [2024-07-24 15:47:45.216434] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:23.709 [2024-07-24 15:47:45.216446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:22:23.709 [2024-07-24 15:47:45.216456] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:23.709 [2024-07-24 15:47:45.216467] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:23.709 [2024-07-24 15:47:45.216495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:23.709 [2024-07-24 15:47:45.216507] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:23.709 [2024-07-24 15:47:45.216518] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:23.709 [2024-07-24 15:47:45.216531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:23.709 [2024-07-24 15:47:45.216543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:23.709 [2024-07-24 15:47:45.216554] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:23.709 [2024-07-24 15:47:45.216565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:23.709 [2024-07-24 15:47:45.216576] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:23.709 [2024-07-24 15:47:45.216587] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:23.709 [2024-07-24 15:47:45.216598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:23.709 [2024-07-24 15:47:45.216609] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:23.709 [2024-07-24 15:47:45.216620] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:23.709 [2024-07-24 15:47:45.216631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:23.709 [2024-07-24 15:47:45.216642] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:23.709 [2024-07-24 15:47:45.216652] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:23.709 [2024-07-24 15:47:45.216663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:23.709 [2024-07-24 15:47:45.216675] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:23.709 [2024-07-24 15:47:45.216686] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:23.709 [2024-07-24 15:47:45.216697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:23.709 [2024-07-24 15:47:45.216708] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:23.709 [2024-07-24 15:47:45.216719] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:23.709 [2024-07-24 15:47:45.216731] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:23.709 [2024-07-24 15:47:45.216747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:23.709 [2024-07-24 15:47:45.216760] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:23.709 [2024-07-24 15:47:45.216771] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:23.709 [2024-07-24 15:47:45.216782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:23.709 [2024-07-24 15:47:45.216793] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:23.709 [2024-07-24 15:47:45.216804] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:23.709 [2024-07-24 15:47:45.216815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:23.709 [2024-07-24 15:47:45.216828] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:23.709 [2024-07-24 15:47:45.216842] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:23.710 [2024-07-24 15:47:45.216855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:23.710 [2024-07-24 15:47:45.216867] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:23.710 [2024-07-24 15:47:45.216879] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:23.710 [2024-07-24 15:47:45.216891] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:23.710 [2024-07-24 15:47:45.216904] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:23.710 [2024-07-24 15:47:45.216916] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:23.710 [2024-07-24 15:47:45.216928] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:23.710 [2024-07-24 15:47:45.216940] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:23.710 [2024-07-24 15:47:45.216951] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:23.710 [2024-07-24 15:47:45.216963] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:23.710 [2024-07-24 15:47:45.216974] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:23.710 [2024-07-24 15:47:45.216986] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:23.710 [2024-07-24 15:47:45.216998] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:23.710 [2024-07-24 15:47:45.217010] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:23.710 [2024-07-24 15:47:45.217022] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:23.710 [2024-07-24 15:47:45.217035] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:23.710 [2024-07-24 15:47:45.217047] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:23.710 [2024-07-24 15:47:45.217059] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:23.710 [2024-07-24 15:47:45.217070] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:22:23.710 [2024-07-24 15:47:45.217083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.710 [2024-07-24 15:47:45.217112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:23.710 [2024-07-24 15:47:45.217126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.897 ms 00:22:23.710 [2024-07-24 15:47:45.217138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.710 [2024-07-24 15:47:45.235294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.710 [2024-07-24 15:47:45.235350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:23.710 [2024-07-24 15:47:45.235371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.099 ms 00:22:23.710 [2024-07-24 15:47:45.235383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.710 [2024-07-24 15:47:45.235499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.710 [2024-07-24 15:47:45.235521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:23.710 [2024-07-24 15:47:45.235534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:23.710 [2024-07-24 15:47:45.235546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.710 [2024-07-24 15:47:45.287269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.710 [2024-07-24 15:47:45.287336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:23.710 [2024-07-24 15:47:45.287357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.635 ms 00:22:23.710 [2024-07-24 15:47:45.287376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.710 [2024-07-24 15:47:45.287477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.710 [2024-07-24 15:47:45.287504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:23.710 [2024-07-24 15:47:45.287526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:23.710 [2024-07-24 15:47:45.287545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.710 [2024-07-24 15:47:45.287968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.710 [2024-07-24 15:47:45.288000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:23.710 [2024-07-24 15:47:45.288014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:22:23.710 [2024-07-24 15:47:45.288026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.710 [2024-07-24 15:47:45.288198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.710 [2024-07-24 15:47:45.288219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:23.710 [2024-07-24 15:47:45.288233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:22:23.710 [2024-07-24 15:47:45.288244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.970 [2024-07-24 15:47:45.305170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.970 [2024-07-24 15:47:45.305227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:23.970 [2024-07-24 15:47:45.305247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.895 ms 00:22:23.970 [2024-07-24 
15:47:45.305260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.970 [2024-07-24 15:47:45.321573] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:23.970 [2024-07-24 15:47:45.321620] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:23.970 [2024-07-24 15:47:45.321638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.970 [2024-07-24 15:47:45.321650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:23.970 [2024-07-24 15:47:45.321664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.223 ms 00:22:23.970 [2024-07-24 15:47:45.321676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.970 [2024-07-24 15:47:45.351710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.970 [2024-07-24 15:47:45.351773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:23.970 [2024-07-24 15:47:45.351793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.978 ms 00:22:23.970 [2024-07-24 15:47:45.351806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.970 [2024-07-24 15:47:45.367711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.970 [2024-07-24 15:47:45.367785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:23.970 [2024-07-24 15:47:45.367804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.827 ms 00:22:23.970 [2024-07-24 15:47:45.367817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.970 [2024-07-24 15:47:45.383641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.970 [2024-07-24 15:47:45.383690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:23.970 [2024-07-24 15:47:45.383708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.767 ms 00:22:23.970 [2024-07-24 15:47:45.383720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.970 [2024-07-24 15:47:45.384232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.970 [2024-07-24 15:47:45.384268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:23.970 [2024-07-24 15:47:45.384284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:22:23.970 [2024-07-24 15:47:45.384296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.970 [2024-07-24 15:47:45.461472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.970 [2024-07-24 15:47:45.461538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:23.970 [2024-07-24 15:47:45.461560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.149 ms 00:22:23.970 [2024-07-24 15:47:45.461573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.970 [2024-07-24 15:47:45.475393] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:23.970 [2024-07-24 15:47:45.478876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.970 [2024-07-24 15:47:45.478936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:23.970 [2024-07-24 15:47:45.478967] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.202 ms 00:22:23.970 [2024-07-24 15:47:45.478991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.970 [2024-07-24 15:47:45.479173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.970 [2024-07-24 15:47:45.479224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:23.970 [2024-07-24 15:47:45.479249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:23.970 [2024-07-24 15:47:45.479270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.970 [2024-07-24 15:47:45.480833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.970 [2024-07-24 15:47:45.480896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:23.970 [2024-07-24 15:47:45.480926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.475 ms 00:22:23.970 [2024-07-24 15:47:45.480948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.970 [2024-07-24 15:47:45.483448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.970 [2024-07-24 15:47:45.483518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:23.970 [2024-07-24 15:47:45.483558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.443 ms 00:22:23.970 [2024-07-24 15:47:45.483580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.970 [2024-07-24 15:47:45.483642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.970 [2024-07-24 15:47:45.483667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:23.970 [2024-07-24 15:47:45.483688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:23.970 [2024-07-24 15:47:45.483720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.970 [2024-07-24 15:47:45.483818] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:23.970 [2024-07-24 15:47:45.483847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.970 [2024-07-24 15:47:45.483868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:23.970 [2024-07-24 15:47:45.483890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:23.970 [2024-07-24 15:47:45.483918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.970 [2024-07-24 15:47:45.531270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.970 [2024-07-24 15:47:45.531367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:23.970 [2024-07-24 15:47:45.531403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.296 ms 00:22:23.970 [2024-07-24 15:47:45.531427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.970 [2024-07-24 15:47:45.531574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.970 [2024-07-24 15:47:45.531618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:23.970 [2024-07-24 15:47:45.531642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:23.970 [2024-07-24 15:47:45.531663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.970 [2024-07-24 15:47:45.541289] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 351.519 ms, result 0 00:23:03.868  Copying: 24/1024 [MB] (24 MBps) Copying: 50/1024 [MB] (25 MBps) Copying: 77/1024 [MB] (27 MBps) Copying: 104/1024 [MB] (27 MBps) Copying: 132/1024 [MB] (27 MBps) Copying: 156/1024 [MB] (24 MBps) Copying: 182/1024 [MB] (25 MBps) Copying: 205/1024 [MB] (23 MBps) Copying: 232/1024 [MB] (26 MBps) Copying: 257/1024 [MB] (25 MBps) Copying: 284/1024 [MB] (27 MBps) Copying: 312/1024 [MB] (27 MBps) Copying: 338/1024 [MB] (26 MBps) Copying: 363/1024 [MB] (24 MBps) Copying: 387/1024 [MB] (24 MBps) Copying: 411/1024 [MB] (24 MBps) Copying: 437/1024 [MB] (25 MBps) Copying: 461/1024 [MB] (23 MBps) Copying: 487/1024 [MB] (25 MBps) Copying: 513/1024 [MB] (26 MBps) Copying: 540/1024 [MB] (26 MBps) Copying: 562/1024 [MB] (22 MBps) Copying: 588/1024 [MB] (25 MBps) Copying: 616/1024 [MB] (27 MBps) Copying: 643/1024 [MB] (27 MBps) Copying: 671/1024 [MB] (27 MBps) Copying: 697/1024 [MB] (26 MBps) Copying: 724/1024 [MB] (26 MBps) Copying: 751/1024 [MB] (27 MBps) Copying: 777/1024 [MB] (26 MBps) Copying: 803/1024 [MB] (26 MBps) Copying: 831/1024 [MB] (27 MBps) Copying: 858/1024 [MB] (27 MBps) Copying: 883/1024 [MB] (25 MBps) Copying: 909/1024 [MB] (26 MBps) Copying: 935/1024 [MB] (25 MBps) Copying: 961/1024 [MB] (26 MBps) Copying: 988/1024 [MB] (26 MBps) Copying: 1014/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 26 MBps)
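The run of Copying lines above is progress output flattened by the log capture; the final cumulative line carries the result. As a sanity check (an illustration, not part of the test), the reported average can be re-derived from the surrounding timestamps, taking 15:47:45.541 ('FTL startup' finished) and 15:48:25.214 (first 'Deinit core IO channel' trace) as the copy window:

    # elapsed seconds across the minute boundary
    elapsed=$(echo "(48*60 + 25.214) - (47*60 + 45.541)" | bc)   # 39.673 s
    echo "scale=1; 1024 / $elapsed" | bc                         # 25.8 MB/s, logged as "average 26 MBps"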
[2024-07-24 15:48:25.213660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.868 [2024-07-24 15:48:25.213751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:03.868 [2024-07-24 15:48:25.213779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:03.868 [2024-07-24 15:48:25.213796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.868 [2024-07-24 15:48:25.213854] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:03.868 [2024-07-24 15:48:25.219252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.868 [2024-07-24 15:48:25.219313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:03.868 [2024-07-24 15:48:25.219332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.367 ms 00:23:03.868 [2024-07-24 15:48:25.219345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.868 [2024-07-24 15:48:25.219629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.868 [2024-07-24 15:48:25.219660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:03.868 [2024-07-24 15:48:25.219676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:23:03.868 [2024-07-24 15:48:25.219688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.868 [2024-07-24 15:48:25.224127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.868 [2024-07-24 15:48:25.224169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:03.868 [2024-07-24 15:48:25.224187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.410 ms 00:23:03.868 [2024-07-24 15:48:25.224199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.868 [2024-07-24 15:48:25.231042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.868 [2024-07-24 15:48:25.231098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:03.868 [2024-07-24 15:48:25.231116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.799 ms 00:23:03.868 [2024-07-24 15:48:25.231128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.868 [2024-07-24 15:48:25.262717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.868 [2024-07-24 15:48:25.262780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:03.868 [2024-07-24 15:48:25.262801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.516 ms 00:23:03.868 [2024-07-24 15:48:25.262814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.868 [2024-07-24 15:48:25.280517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.868 [2024-07-24 15:48:25.280575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:03.868 [2024-07-24 15:48:25.280595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.644 ms 00:23:03.868 [2024-07-24 15:48:25.280608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.868 [2024-07-24 15:48:25.379197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.868 [2024-07-24 15:48:25.379257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:03.868 [2024-07-24 15:48:25.379278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.534 ms 00:23:03.868 [2024-07-24 15:48:25.379291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.868 [2024-07-24 15:48:25.411166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.868 [2024-07-24 15:48:25.411243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:03.868 [2024-07-24 15:48:25.411270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.851 ms 00:23:03.868 [2024-07-24 15:48:25.411283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.868 [2024-07-24 15:48:25.442590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.868 [2024-07-24 15:48:25.442653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:03.868 [2024-07-24 15:48:25.442673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.253 ms 00:23:03.868 [2024-07-24 15:48:25.442686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.128 [2024-07-24 15:48:25.473565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.128 [2024-07-24 15:48:25.473630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:04.128 [2024-07-24 15:48:25.473650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.826 ms 00:23:04.128 [2024-07-24 15:48:25.473662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.128 [2024-07-24 15:48:25.504666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.128 [2024-07-24 15:48:25.504725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:04.128 [2024-07-24 15:48:25.504745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.864 ms 00:23:04.128 [2024-07-24 15:48:25.504757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.128 [2024-07-24 15:48:25.504807] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:04.128 [2024-07-24 15:48:25.504832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:23:04.128 [2024-07-24 15:48:25.504847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.504859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.504871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.504893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.504905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.504917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.504928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.504940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.504952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.504964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.504976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.504988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:04.128 [2024-07-24 15:48:25.505225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505751] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.505989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.506001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.506018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.506036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.506049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.506061] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:04.129 [2024-07-24 15:48:25.506082] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:04.129 [2024-07-24 15:48:25.506107] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2296fe34-9ba1-4bd9-945f-27e23b547ff8 00:23:04.129 [2024-07-24 15:48:25.506119] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:23:04.129 [2024-07-24 15:48:25.506130] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 15808 00:23:04.129 [2024-07-24 15:48:25.506141] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 14848 00:23:04.129 [2024-07-24 15:48:25.506153] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0647 00:23:04.129 [2024-07-24 15:48:25.506165] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:04.129 [2024-07-24 15:48:25.506176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:04.129 [2024-07-24 15:48:25.506196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:04.129 [2024-07-24 15:48:25.506206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:04.129 [2024-07-24 15:48:25.506216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:04.129 [2024-07-24 15:48:25.506228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.129 [2024-07-24 15:48:25.506239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:04.129 [2024-07-24 15:48:25.506251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.422 ms 00:23:04.129 [2024-07-24 15:48:25.506263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
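The statistics block above gives enough to reproduce the WAF figure: write amplification is total media writes divided by user writes, and the 960-write difference is presumably the FTL's own metadata and relocation traffic. A quick check:

    # WAF = total writes / user writes, using the two counters dumped above
    echo "scale=4; 15808 / 14848" | bc   # 1.0646..., logged (rounded) as 1.0647
    echo $(( 15808 - 14848 ))            # 960 writes not issued by the user I/O path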
00:23:04.129 [2024-07-24 15:48:25.522776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.129 [2024-07-24 15:48:25.522817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:04.129 [2024-07-24 15:48:25.522842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.434 ms 00:23:04.129 [2024-07-24 15:48:25.522856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.129 [2024-07-24 15:48:25.523126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.129 [2024-07-24 15:48:25.523152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:04.129 [2024-07-24 15:48:25.523166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:23:04.130 [2024-07-24 15:48:25.523178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.130 [2024-07-24 15:48:25.569008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.130 [2024-07-24 15:48:25.569076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:04.130 [2024-07-24 15:48:25.569113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.130 [2024-07-24 15:48:25.569125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.130 [2024-07-24 15:48:25.569205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.130 [2024-07-24 15:48:25.569220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:04.130 [2024-07-24 15:48:25.569233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.130 [2024-07-24 15:48:25.569245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.130 [2024-07-24 15:48:25.569358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.130 [2024-07-24 15:48:25.569377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:04.130 [2024-07-24 15:48:25.569390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.130 [2024-07-24 15:48:25.569408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.130 [2024-07-24 15:48:25.569431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.130 [2024-07-24 15:48:25.569445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:04.130 [2024-07-24 15:48:25.569457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.130 [2024-07-24 15:48:25.569468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.130 [2024-07-24 15:48:25.668497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.130 [2024-07-24 15:48:25.668566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:04.130 [2024-07-24 15:48:25.668594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.130 [2024-07-24 15:48:25.668607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.130 [2024-07-24 15:48:25.707902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.130 [2024-07-24 15:48:25.707967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:04.130 [2024-07-24 15:48:25.707987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.130 [2024-07-24 15:48:25.707999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.130 [2024-07-24 15:48:25.708129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.130 [2024-07-24 15:48:25.708150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:04.130 [2024-07-24 15:48:25.708163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.130 [2024-07-24 15:48:25.708175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.130 [2024-07-24 15:48:25.708245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.130 [2024-07-24 15:48:25.708262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:04.130 [2024-07-24 15:48:25.708274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.130 [2024-07-24 15:48:25.708286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.130 [2024-07-24 15:48:25.708406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.130 [2024-07-24 15:48:25.708436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:04.130 [2024-07-24 15:48:25.708450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.130 [2024-07-24 15:48:25.708462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.130 [2024-07-24 15:48:25.708523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.130 [2024-07-24 15:48:25.708541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:04.130 [2024-07-24 15:48:25.708554] mngt/ftl_mngt.c:
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.130 [2024-07-24 15:48:25.708565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.130 [2024-07-24 15:48:25.708608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.130 [2024-07-24 15:48:25.708624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:04.130 [2024-07-24 15:48:25.708636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.130 [2024-07-24 15:48:25.708647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.130 [2024-07-24 15:48:25.708708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.130 [2024-07-24 15:48:25.708735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:04.130 [2024-07-24 15:48:25.708749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.130 [2024-07-24 15:48:25.708761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.130 [2024-07-24 15:48:25.708904] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 495.216 ms, result 0 00:23:05.506 00:23:05.506 00:23:05.506 15:48:26 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:08.037 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:08.037 15:48:29 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:08.037 15:48:29 -- ftl/restore.sh@85 -- # restore_kill 00:23:08.037 15:48:29 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:08.037 15:48:29 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:08.037 15:48:29 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:08.037 15:48:29 -- ftl/restore.sh@32 -- # killprocess 73722 00:23:08.037 15:48:29 -- common/autotest_common.sh@926 -- # '[' -z 73722 ']' 00:23:08.037 15:48:29 -- common/autotest_common.sh@930 -- # kill -0 73722 00:23:08.037 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (73722) - No such process 00:23:08.037 Process with pid 73722 is not found 00:23:08.037 15:48:29 -- common/autotest_common.sh@953 -- # echo 'Process with pid 73722 is not found' 00:23:08.037 Remove shared memory files 00:23:08.037 15:48:29 -- ftl/restore.sh@33 -- # remove_shm 00:23:08.037 15:48:29 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:08.037 15:48:29 -- ftl/common.sh@205 -- # rm -f rm -f 00:23:08.037 15:48:29 -- ftl/common.sh@206 -- # rm -f rm -f 00:23:08.037 15:48:29 -- ftl/common.sh@207 -- # rm -f rm -f 00:23:08.037 15:48:29 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:08.037 15:48:29 -- ftl/common.sh@209 -- # rm -f rm -f 00:23:08.037 00:23:08.037 real 3m13.801s 00:23:08.037 user 2m59.261s 00:23:08.037 sys 0m16.829s 00:23:08.037 15:48:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:08.037 15:48:29 -- common/autotest_common.sh@10 -- # set +x 00:23:08.037 ************************************ 00:23:08.037 END TEST ftl_restore 00:23:08.037 ************************************ 00:23:08.037 15:48:29 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:23:08.037 15:48:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:08.037 15:48:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 
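The testfile: OK line above is the restore test's real pass condition: a checksum recorded before the dirty shutdown has to match once the FTL device has been brought back up and the data re-read. Schematically, with $testdir standing in for the test directory (a sketch, not the literal script):

    md5sum "$testdir/testfile" > "$testdir/testfile.md5"   # taken before the shutdown
    # ... dirty shutdown, FTL bdev re-created, data read back ...
    md5sum -c "$testdir/testfile.md5"                      # prints 'testfile: OK' on a match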
00:23:08.037 15:48:29 -- common/autotest_common.sh@10 -- # set +x 00:23:08.037 ************************************ 00:23:08.037 START TEST ftl_dirty_shutdown 00:23:08.037 ************************************ 00:23:08.037 15:48:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:23:08.037 * Looking for test storage... 00:23:08.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:08.037 15:48:29 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:08.037 15:48:29 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:08.037 15:48:29 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:08.037 15:48:29 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:08.037 15:48:29 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:08.037 15:48:29 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:08.037 15:48:29 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:08.037 15:48:29 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:08.037 15:48:29 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:08.037 15:48:29 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:08.037 15:48:29 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:08.037 15:48:29 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:08.037 15:48:29 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:08.037 15:48:29 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:08.037 15:48:29 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:08.037 15:48:29 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:08.037 15:48:29 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:08.037 15:48:29 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:08.037 15:48:29 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:08.037 15:48:29 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:08.037 15:48:29 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:08.037 15:48:29 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:08.037 15:48:29 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:08.037 15:48:29 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:08.037 15:48:29 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:08.037 15:48:29 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:08.037 15:48:29 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:08.037 15:48:29 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:08.037 15:48:29 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:08.037 15:48:29 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:08.037 15:48:29 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:08.038 15:48:29 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:08.038 15:48:29 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:08.038 15:48:29 
-- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:23:08.038 15:48:29 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:08.038 15:48:29 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:08.038 15:48:29 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:23:08.038 15:48:29 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:08.038 15:48:29 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:08.038 15:48:29 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:08.038 15:48:29 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:08.038 15:48:29 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:08.038 15:48:29 -- ftl/dirty_shutdown.sh@45 -- # svcpid=75782 00:23:08.038 15:48:29 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 75782 00:23:08.038 15:48:29 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:08.038 15:48:29 -- common/autotest_common.sh@819 -- # '[' -z 75782 ']' 00:23:08.038 15:48:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.038 15:48:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:08.038 15:48:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.038 15:48:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:08.038 15:48:29 -- common/autotest_common.sh@10 -- # set +x 00:23:08.038 [2024-07-24 15:48:29.545960] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:08.038 [2024-07-24 15:48:29.546139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75782 ] 00:23:08.296 [2024-07-24 15:48:29.718582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.554 [2024-07-24 15:48:29.906442] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:08.554 [2024-07-24 15:48:29.906668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.930 15:48:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:09.930 15:48:31 -- common/autotest_common.sh@852 -- # return 0 00:23:09.930 15:48:31 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:23:09.930 15:48:31 -- ftl/common.sh@54 -- # local name=nvme0 00:23:09.930 15:48:31 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:23:09.930 15:48:31 -- ftl/common.sh@56 -- # local size=103424 00:23:09.930 15:48:31 -- ftl/common.sh@59 -- # local base_bdev 00:23:09.930 15:48:31 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:23:10.189 15:48:31 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:10.189 15:48:31 -- ftl/common.sh@62 -- # local base_size 00:23:10.189 15:48:31 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:10.189 15:48:31 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:23:10.189 15:48:31 -- common/autotest_common.sh@1358 -- # local bdev_info 00:23:10.189 15:48:31 -- common/autotest_common.sh@1359 -- # local bs 00:23:10.189 15:48:31 -- common/autotest_common.sh@1360 -- # local nb 00:23:10.189 15:48:31 -- common/autotest_common.sh@1361 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:10.462 15:48:31 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:23:10.462 { 00:23:10.462 "name": "nvme0n1", 00:23:10.462 "aliases": [ 00:23:10.462 "d6768ac4-0597-473b-9091-caff096bf2e7" 00:23:10.462 ], 00:23:10.462 "product_name": "NVMe disk", 00:23:10.462 "block_size": 4096, 00:23:10.462 "num_blocks": 1310720, 00:23:10.462 "uuid": "d6768ac4-0597-473b-9091-caff096bf2e7", 00:23:10.462 "assigned_rate_limits": { 00:23:10.462 "rw_ios_per_sec": 0, 00:23:10.462 "rw_mbytes_per_sec": 0, 00:23:10.462 "r_mbytes_per_sec": 0, 00:23:10.462 "w_mbytes_per_sec": 0 00:23:10.462 }, 00:23:10.462 "claimed": true, 00:23:10.462 "claim_type": "read_many_write_one", 00:23:10.462 "zoned": false, 00:23:10.462 "supported_io_types": { 00:23:10.462 "read": true, 00:23:10.462 "write": true, 00:23:10.462 "unmap": true, 00:23:10.462 "write_zeroes": true, 00:23:10.462 "flush": true, 00:23:10.462 "reset": true, 00:23:10.462 "compare": true, 00:23:10.462 "compare_and_write": false, 00:23:10.462 "abort": true, 00:23:10.462 "nvme_admin": true, 00:23:10.462 "nvme_io": true 00:23:10.462 }, 00:23:10.462 "driver_specific": { 00:23:10.462 "nvme": [ 00:23:10.462 { 00:23:10.462 "pci_address": "0000:00:07.0", 00:23:10.462 "trid": { 00:23:10.462 "trtype": "PCIe", 00:23:10.462 "traddr": "0000:00:07.0" 00:23:10.462 }, 00:23:10.462 "ctrlr_data": { 00:23:10.462 "cntlid": 0, 00:23:10.462 "vendor_id": "0x1b36", 00:23:10.462 "model_number": "QEMU NVMe Ctrl", 00:23:10.462 "serial_number": "12341", 00:23:10.462 "firmware_revision": "8.0.0", 00:23:10.462 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:10.462 "oacs": { 00:23:10.462 "security": 0, 00:23:10.462 "format": 1, 00:23:10.462 "firmware": 0, 00:23:10.462 "ns_manage": 1 00:23:10.462 }, 00:23:10.462 "multi_ctrlr": false, 00:23:10.462 "ana_reporting": false 00:23:10.462 }, 00:23:10.462 "vs": { 00:23:10.462 "nvme_version": "1.4" 00:23:10.462 }, 00:23:10.462 "ns_data": { 00:23:10.462 "id": 1, 00:23:10.462 "can_share": false 00:23:10.462 } 00:23:10.462 } 00:23:10.462 ], 00:23:10.462 "mp_policy": "active_passive" 00:23:10.462 } 00:23:10.462 } 00:23:10.462 ]' 00:23:10.462 15:48:31 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:23:10.462 15:48:31 -- common/autotest_common.sh@1362 -- # bs=4096 00:23:10.462 15:48:31 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:23:10.462 15:48:31 -- common/autotest_common.sh@1363 -- # nb=1310720 00:23:10.462 15:48:31 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:23:10.462 15:48:31 -- common/autotest_common.sh@1367 -- # echo 5120 00:23:10.462 15:48:31 -- ftl/common.sh@63 -- # base_size=5120 00:23:10.462 15:48:31 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:10.462 15:48:31 -- ftl/common.sh@67 -- # clear_lvols 00:23:10.462 15:48:31 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:10.462 15:48:31 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:10.723 15:48:32 -- ftl/common.sh@28 -- # stores=7f53684c-d94e-4261-b991-e90de349ea44 00:23:10.723 15:48:32 -- ftl/common.sh@29 -- # for lvs in $stores 00:23:10.723 15:48:32 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7f53684c-d94e-4261-b991-e90de349ea44 00:23:10.980 15:48:32 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:11.238 15:48:32 -- ftl/common.sh@68 -- # lvs=b3157fe7-3b40-4664-8423-1ce1aa7c9996
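The get_bdev_size trace above shows how the helper sizes a bdev: it pulls .block_size and .num_blocks out of the bdev_get_bdevs JSON with jq and converts their product to MiB, which is where bs=4096, nb=1310720 and bdev_size=5120 come from. The same arithmetic, standalone:

    bs=4096; nb=1310720                 # from jq '.[] .block_size' / jq '.[] .num_blocks'
    echo $(( bs * nb / 1024 / 1024 ))   # 5120 MiB, i.e. the 5 GiB QEMU NVMe namespace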
00:23:11.238 15:48:32 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b3157fe7-3b40-4664-8423-1ce1aa7c9996 00:23:11.497 15:48:32 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=65557cc8-12c3-4358-8cb3-4ac864cc09eb 00:23:11.497 15:48:32 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:23:11.497 15:48:32 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 65557cc8-12c3-4358-8cb3-4ac864cc09eb 00:23:11.497 15:48:32 -- ftl/common.sh@35 -- # local name=nvc0 00:23:11.497 15:48:32 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:23:11.497 15:48:32 -- ftl/common.sh@37 -- # local base_bdev=65557cc8-12c3-4358-8cb3-4ac864cc09eb 00:23:11.497 15:48:32 -- ftl/common.sh@38 -- # local cache_size= 00:23:11.497 15:48:32 -- ftl/common.sh@41 -- # get_bdev_size 65557cc8-12c3-4358-8cb3-4ac864cc09eb 00:23:11.497 15:48:32 -- common/autotest_common.sh@1357 -- # local bdev_name=65557cc8-12c3-4358-8cb3-4ac864cc09eb 00:23:11.497 15:48:32 -- common/autotest_common.sh@1358 -- # local bdev_info 00:23:11.497 15:48:32 -- common/autotest_common.sh@1359 -- # local bs 00:23:11.497 15:48:32 -- common/autotest_common.sh@1360 -- # local nb 00:23:11.497 15:48:32 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 65557cc8-12c3-4358-8cb3-4ac864cc09eb 00:23:11.756 15:48:33 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:23:11.756 { 00:23:11.756 "name": "65557cc8-12c3-4358-8cb3-4ac864cc09eb", 00:23:11.756 "aliases": [ 00:23:11.756 "lvs/nvme0n1p0" 00:23:11.756 ], 00:23:11.756 "product_name": "Logical Volume", 00:23:11.756 "block_size": 4096, 00:23:11.756 "num_blocks": 26476544, 00:23:11.756 "uuid": "65557cc8-12c3-4358-8cb3-4ac864cc09eb", 00:23:11.756 "assigned_rate_limits": { 00:23:11.756 "rw_ios_per_sec": 0, 00:23:11.756 "rw_mbytes_per_sec": 0, 00:23:11.756 "r_mbytes_per_sec": 0, 00:23:11.756 "w_mbytes_per_sec": 0 00:23:11.756 }, 00:23:11.756 "claimed": false, 00:23:11.756 "zoned": false, 00:23:11.756 "supported_io_types": { 00:23:11.756 "read": true, 00:23:11.756 "write": true, 00:23:11.756 "unmap": true, 00:23:11.756 "write_zeroes": true, 00:23:11.756 "flush": false, 00:23:11.756 "reset": true, 00:23:11.756 "compare": false, 00:23:11.756 "compare_and_write": false, 00:23:11.756 "abort": false, 00:23:11.756 "nvme_admin": false, 00:23:11.756 "nvme_io": false 00:23:11.756 }, 00:23:11.756 "driver_specific": { 00:23:11.756 "lvol": { 00:23:11.756 "lvol_store_uuid": "b3157fe7-3b40-4664-8423-1ce1aa7c9996", 00:23:11.756 "base_bdev": "nvme0n1", 00:23:11.756 "thin_provision": true, 00:23:11.756 "snapshot": false, 00:23:11.756 "clone": false, 00:23:11.756 "esnap_clone": false 00:23:11.756 } 00:23:11.756 } 00:23:11.756 } 00:23:11.756 ]' 00:23:11.756 15:48:33 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:23:11.756 15:48:33 -- common/autotest_common.sh@1362 -- # bs=4096 00:23:11.756 15:48:33 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:23:12.015 15:48:33 -- common/autotest_common.sh@1363 -- # nb=26476544 00:23:12.015 15:48:33 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:23:12.015 15:48:33 -- common/autotest_common.sh@1367 -- # echo 103424 00:23:12.015 15:48:33 -- ftl/common.sh@41 -- # local base_size=5171 00:23:12.015 15:48:33 -- ftl/common.sh@44 -- # local nvc_bdev 00:23:12.015 15:48:33 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:23:12.271 15:48:33 -- ftl/common.sh@45 -- #
nvc_bdev=nvc0n1 00:23:12.271 15:48:33 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:12.271 15:48:33 -- ftl/common.sh@48 -- # get_bdev_size 65557cc8-12c3-4358-8cb3-4ac864cc09eb 00:23:12.271 15:48:33 -- common/autotest_common.sh@1357 -- # local bdev_name=65557cc8-12c3-4358-8cb3-4ac864cc09eb 00:23:12.271 15:48:33 -- common/autotest_common.sh@1358 -- # local bdev_info 00:23:12.271 15:48:33 -- common/autotest_common.sh@1359 -- # local bs 00:23:12.271 15:48:33 -- common/autotest_common.sh@1360 -- # local nb 00:23:12.271 15:48:33 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 65557cc8-12c3-4358-8cb3-4ac864cc09eb 00:23:12.529 15:48:33 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:23:12.529 { 00:23:12.529 "name": "65557cc8-12c3-4358-8cb3-4ac864cc09eb", 00:23:12.529 "aliases": [ 00:23:12.529 "lvs/nvme0n1p0" 00:23:12.529 ], 00:23:12.529 "product_name": "Logical Volume", 00:23:12.529 "block_size": 4096, 00:23:12.529 "num_blocks": 26476544, 00:23:12.529 "uuid": "65557cc8-12c3-4358-8cb3-4ac864cc09eb", 00:23:12.529 "assigned_rate_limits": { 00:23:12.529 "rw_ios_per_sec": 0, 00:23:12.529 "rw_mbytes_per_sec": 0, 00:23:12.529 "r_mbytes_per_sec": 0, 00:23:12.529 "w_mbytes_per_sec": 0 00:23:12.529 }, 00:23:12.529 "claimed": false, 00:23:12.529 "zoned": false, 00:23:12.529 "supported_io_types": { 00:23:12.529 "read": true, 00:23:12.529 "write": true, 00:23:12.529 "unmap": true, 00:23:12.529 "write_zeroes": true, 00:23:12.529 "flush": false, 00:23:12.529 "reset": true, 00:23:12.529 "compare": false, 00:23:12.529 "compare_and_write": false, 00:23:12.529 "abort": false, 00:23:12.529 "nvme_admin": false, 00:23:12.529 "nvme_io": false 00:23:12.529 }, 00:23:12.529 "driver_specific": { 00:23:12.529 "lvol": { 00:23:12.529 "lvol_store_uuid": "b3157fe7-3b40-4664-8423-1ce1aa7c9996", 00:23:12.529 "base_bdev": "nvme0n1", 00:23:12.529 "thin_provision": true, 00:23:12.529 "snapshot": false, 00:23:12.529 "clone": false, 00:23:12.529 "esnap_clone": false 00:23:12.529 } 00:23:12.529 } 00:23:12.529 } 00:23:12.529 ]' 00:23:12.529 15:48:33 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:23:12.529 15:48:34 -- common/autotest_common.sh@1362 -- # bs=4096 00:23:12.529 15:48:34 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:23:12.529 15:48:34 -- common/autotest_common.sh@1363 -- # nb=26476544 00:23:12.529 15:48:34 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:23:12.529 15:48:34 -- common/autotest_common.sh@1367 -- # echo 103424 00:23:12.529 15:48:34 -- ftl/common.sh@48 -- # cache_size=5171 00:23:12.529 15:48:34 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:12.787 15:48:34 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:12.787 15:48:34 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 65557cc8-12c3-4358-8cb3-4ac864cc09eb 00:23:12.787 15:48:34 -- common/autotest_common.sh@1357 -- # local bdev_name=65557cc8-12c3-4358-8cb3-4ac864cc09eb 00:23:12.787 15:48:34 -- common/autotest_common.sh@1358 -- # local bdev_info 00:23:12.787 15:48:34 -- common/autotest_common.sh@1359 -- # local bs 00:23:12.787 15:48:34 -- common/autotest_common.sh@1360 -- # local nb 00:23:12.787 15:48:34 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 65557cc8-12c3-4358-8cb3-4ac864cc09eb 00:23:13.045 15:48:34 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:23:13.045 { 00:23:13.045 "name": "65557cc8-12c3-4358-8cb3-4ac864cc09eb", 00:23:13.045 
"aliases": [ 00:23:13.045 "lvs/nvme0n1p0" 00:23:13.045 ], 00:23:13.045 "product_name": "Logical Volume", 00:23:13.045 "block_size": 4096, 00:23:13.045 "num_blocks": 26476544, 00:23:13.045 "uuid": "65557cc8-12c3-4358-8cb3-4ac864cc09eb", 00:23:13.045 "assigned_rate_limits": { 00:23:13.045 "rw_ios_per_sec": 0, 00:23:13.045 "rw_mbytes_per_sec": 0, 00:23:13.045 "r_mbytes_per_sec": 0, 00:23:13.045 "w_mbytes_per_sec": 0 00:23:13.045 }, 00:23:13.045 "claimed": false, 00:23:13.045 "zoned": false, 00:23:13.045 "supported_io_types": { 00:23:13.045 "read": true, 00:23:13.045 "write": true, 00:23:13.045 "unmap": true, 00:23:13.045 "write_zeroes": true, 00:23:13.045 "flush": false, 00:23:13.045 "reset": true, 00:23:13.045 "compare": false, 00:23:13.045 "compare_and_write": false, 00:23:13.045 "abort": false, 00:23:13.045 "nvme_admin": false, 00:23:13.045 "nvme_io": false 00:23:13.045 }, 00:23:13.045 "driver_specific": { 00:23:13.045 "lvol": { 00:23:13.045 "lvol_store_uuid": "b3157fe7-3b40-4664-8423-1ce1aa7c9996", 00:23:13.045 "base_bdev": "nvme0n1", 00:23:13.045 "thin_provision": true, 00:23:13.045 "snapshot": false, 00:23:13.045 "clone": false, 00:23:13.045 "esnap_clone": false 00:23:13.045 } 00:23:13.045 } 00:23:13.045 } 00:23:13.045 ]' 00:23:13.045 15:48:34 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:23:13.304 15:48:34 -- common/autotest_common.sh@1362 -- # bs=4096 00:23:13.304 15:48:34 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:23:13.304 15:48:34 -- common/autotest_common.sh@1363 -- # nb=26476544 00:23:13.304 15:48:34 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:23:13.304 15:48:34 -- common/autotest_common.sh@1367 -- # echo 103424 00:23:13.304 15:48:34 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:13.304 15:48:34 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 65557cc8-12c3-4358-8cb3-4ac864cc09eb --l2p_dram_limit 10' 00:23:13.304 15:48:34 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:13.304 15:48:34 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:23:13.304 15:48:34 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:13.304 15:48:34 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 65557cc8-12c3-4358-8cb3-4ac864cc09eb --l2p_dram_limit 10 -c nvc0n1p0 00:23:13.563 [2024-07-24 15:48:34.990766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.563 [2024-07-24 15:48:34.990847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:13.563 [2024-07-24 15:48:34.990886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:13.563 [2024-07-24 15:48:34.990908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.563 [2024-07-24 15:48:34.991002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.563 [2024-07-24 15:48:34.991021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:13.563 [2024-07-24 15:48:34.991037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:13.563 [2024-07-24 15:48:34.991049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.563 [2024-07-24 15:48:34.991083] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:13.563 [2024-07-24 15:48:34.992061] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as 
NV Cache device 00:23:13.563 [2024-07-24 15:48:34.992122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.563 [2024-07-24 15:48:34.992148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:13.563 [2024-07-24 15:48:34.992165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:23:13.563 [2024-07-24 15:48:34.992176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.563 [2024-07-24 15:48:34.992314] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID da504125-947e-4746-8024-4a00d4456ba1 00:23:13.563 [2024-07-24 15:48:34.993394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.563 [2024-07-24 15:48:34.993444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:13.563 [2024-07-24 15:48:34.993461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:13.563 [2024-07-24 15:48:34.993475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.563 [2024-07-24 15:48:34.998166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.563 [2024-07-24 15:48:34.998228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:13.563 [2024-07-24 15:48:34.998245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.621 ms 00:23:13.563 [2024-07-24 15:48:34.998259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.563 [2024-07-24 15:48:34.998395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.563 [2024-07-24 15:48:34.998418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:13.563 [2024-07-24 15:48:34.998432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:23:13.563 [2024-07-24 15:48:34.998449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.563 [2024-07-24 15:48:34.998517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.563 [2024-07-24 15:48:34.998539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:13.563 [2024-07-24 15:48:34.998552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:13.563 [2024-07-24 15:48:34.998569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.563 [2024-07-24 15:48:34.998606] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:13.563 [2024-07-24 15:48:35.003218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.563 [2024-07-24 15:48:35.003261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:13.563 [2024-07-24 15:48:35.003281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.622 ms 00:23:13.563 [2024-07-24 15:48:35.003293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.563 [2024-07-24 15:48:35.003349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.563 [2024-07-24 15:48:35.003365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:13.563 [2024-07-24 15:48:35.003380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:13.563 [2024-07-24 15:48:35.003391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.563 [2024-07-24 15:48:35.003456] ftl_layout.c: 
605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:13.563 [2024-07-24 15:48:35.003592] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:13.563 [2024-07-24 15:48:35.003615] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:13.563 [2024-07-24 15:48:35.003631] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:13.563 [2024-07-24 15:48:35.003648] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:13.563 [2024-07-24 15:48:35.003663] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:13.563 [2024-07-24 15:48:35.003677] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:13.563 [2024-07-24 15:48:35.003689] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:13.563 [2024-07-24 15:48:35.003702] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:13.563 [2024-07-24 15:48:35.003717] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:13.563 [2024-07-24 15:48:35.003732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.563 [2024-07-24 15:48:35.003744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:13.563 [2024-07-24 15:48:35.003771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:23:13.563 [2024-07-24 15:48:35.003783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.563 [2024-07-24 15:48:35.003872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.563 [2024-07-24 15:48:35.003895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:13.563 [2024-07-24 15:48:35.003910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:13.563 [2024-07-24 15:48:35.003922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.564 [2024-07-24 15:48:35.004014] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:13.564 [2024-07-24 15:48:35.004042] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:13.564 [2024-07-24 15:48:35.004059] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:13.564 [2024-07-24 15:48:35.004071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.564 [2024-07-24 15:48:35.004098] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:13.564 [2024-07-24 15:48:35.004113] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:13.564 [2024-07-24 15:48:35.004127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:13.564 [2024-07-24 15:48:35.004138] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:13.564 [2024-07-24 15:48:35.004151] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:13.564 [2024-07-24 15:48:35.004161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:13.564 [2024-07-24 15:48:35.004174] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:13.564 [2024-07-24 15:48:35.004185] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:13.564 
[2024-07-24 15:48:35.004199] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:13.564 [2024-07-24 15:48:35.004210] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:13.564 [2024-07-24 15:48:35.004223] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:13.564 [2024-07-24 15:48:35.004234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.564 [2024-07-24 15:48:35.004249] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:13.564 [2024-07-24 15:48:35.004260] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:13.564 [2024-07-24 15:48:35.004273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.564 [2024-07-24 15:48:35.004284] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:13.564 [2024-07-24 15:48:35.004296] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:13.564 [2024-07-24 15:48:35.004307] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:13.564 [2024-07-24 15:48:35.004320] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:13.564 [2024-07-24 15:48:35.004330] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:13.564 [2024-07-24 15:48:35.004343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:13.564 [2024-07-24 15:48:35.004354] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:13.564 [2024-07-24 15:48:35.004368] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:13.564 [2024-07-24 15:48:35.004379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:13.564 [2024-07-24 15:48:35.004396] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:13.564 [2024-07-24 15:48:35.004407] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:13.564 [2024-07-24 15:48:35.004419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:13.564 [2024-07-24 15:48:35.004430] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:13.564 [2024-07-24 15:48:35.004444] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:13.564 [2024-07-24 15:48:35.004455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:13.564 [2024-07-24 15:48:35.004467] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:13.564 [2024-07-24 15:48:35.004478] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:13.564 [2024-07-24 15:48:35.004490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:13.564 [2024-07-24 15:48:35.004500] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:13.564 [2024-07-24 15:48:35.004514] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:13.564 [2024-07-24 15:48:35.004525] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:13.564 [2024-07-24 15:48:35.004537] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:13.564 [2024-07-24 15:48:35.004549] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:13.564 [2024-07-24 15:48:35.004562] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:13.564 [2024-07-24 15:48:35.004573] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:23:13.564 [2024-07-24 15:48:35.004587] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:13.564 [2024-07-24 15:48:35.004599] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:13.564 [2024-07-24 15:48:35.004611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:13.564 [2024-07-24 15:48:35.004622] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:13.564 [2024-07-24 15:48:35.004636] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:13.564 [2024-07-24 15:48:35.004647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:13.564 [2024-07-24 15:48:35.004660] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:13.564 [2024-07-24 15:48:35.004675] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:13.564 [2024-07-24 15:48:35.004693] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:13.564 [2024-07-24 15:48:35.004705] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:13.564 [2024-07-24 15:48:35.004719] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:13.564 [2024-07-24 15:48:35.004731] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:13.564 [2024-07-24 15:48:35.004744] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:13.564 [2024-07-24 15:48:35.004756] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:13.564 [2024-07-24 15:48:35.004771] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:13.564 [2024-07-24 15:48:35.004783] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:13.564 [2024-07-24 15:48:35.004797] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:13.564 [2024-07-24 15:48:35.004809] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:13.564 [2024-07-24 15:48:35.004823] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:13.564 [2024-07-24 15:48:35.004835] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:13.564 [2024-07-24 15:48:35.004853] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:13.564 [2024-07-24 15:48:35.004866] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:13.564 [2024-07-24 15:48:35.004880] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:13.564 [2024-07-24 15:48:35.004893] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:13.564 [2024-07-24 15:48:35.004907] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:13.564 [2024-07-24 15:48:35.004919] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:13.564 [2024-07-24 15:48:35.004933] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:13.564 [2024-07-24 15:48:35.004945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.564 [2024-07-24 15:48:35.004960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:13.564 [2024-07-24 15:48:35.004972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:23:13.564 [2024-07-24 15:48:35.004986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.564 [2024-07-24 15:48:35.023454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.564 [2024-07-24 15:48:35.023523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:13.564 [2024-07-24 15:48:35.023545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.409 ms 00:23:13.564 [2024-07-24 15:48:35.023559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.564 [2024-07-24 15:48:35.023684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.564 [2024-07-24 15:48:35.023705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:13.564 [2024-07-24 15:48:35.023719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:13.564 [2024-07-24 15:48:35.023732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.564 [2024-07-24 15:48:35.062732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.564 [2024-07-24 15:48:35.062798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:13.564 [2024-07-24 15:48:35.062820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.914 ms 00:23:13.564 [2024-07-24 15:48:35.062851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.564 [2024-07-24 15:48:35.062942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.564 [2024-07-24 15:48:35.062966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:13.564 [2024-07-24 15:48:35.062980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:13.564 [2024-07-24 15:48:35.062994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.564 [2024-07-24 15:48:35.063414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.564 [2024-07-24 15:48:35.063451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:13.564 [2024-07-24 15:48:35.063467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:23:13.564 [2024-07-24 15:48:35.063481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.564 [2024-07-24 15:48:35.063622] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.564 [2024-07-24 15:48:35.063653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:13.564 [2024-07-24 15:48:35.063668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:23:13.564 [2024-07-24 15:48:35.063681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.565 [2024-07-24 15:48:35.081827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.565 [2024-07-24 15:48:35.081896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:13.565 [2024-07-24 15:48:35.081918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.117 ms 00:23:13.565 [2024-07-24 15:48:35.081933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.565 [2024-07-24 15:48:35.095550] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:13.565 [2024-07-24 15:48:35.098340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.565 [2024-07-24 15:48:35.098387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:13.565 [2024-07-24 15:48:35.098410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.257 ms 00:23:13.565 [2024-07-24 15:48:35.098423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.565 [2024-07-24 15:48:35.156193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.565 [2024-07-24 15:48:35.156272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:13.565 [2024-07-24 15:48:35.156296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.703 ms 00:23:13.565 [2024-07-24 15:48:35.156309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.565 [2024-07-24 15:48:35.156395] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
00:23:13.565 [2024-07-24 15:48:35.156427] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:23:16.091 [2024-07-24 15:48:37.229915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.091 [2024-07-24 15:48:37.229980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:16.091 [2024-07-24 15:48:37.230006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2073.531 ms 00:23:16.091 [2024-07-24 15:48:37.230019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.091 [2024-07-24 15:48:37.230283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.091 [2024-07-24 15:48:37.230313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:16.091 [2024-07-24 15:48:37.230332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:23:16.091 [2024-07-24 15:48:37.230344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.091 [2024-07-24 15:48:37.261943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.091 [2024-07-24 15:48:37.262008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:16.091 [2024-07-24 15:48:37.262032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.498 ms 00:23:16.091 [2024-07-24 15:48:37.262045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.091 [2024-07-24 15:48:37.292897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.091 [2024-07-24 15:48:37.292949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:16.091 [2024-07-24 15:48:37.292976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.777 ms 00:23:16.091 [2024-07-24 15:48:37.292988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.091 [2024-07-24 15:48:37.293411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.091 [2024-07-24 15:48:37.293447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:16.091 [2024-07-24 15:48:37.293466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:23:16.091 [2024-07-24 15:48:37.293478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.091 [2024-07-24 15:48:37.373794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.091 [2024-07-24 15:48:37.373903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:16.091 [2024-07-24 15:48:37.373930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.234 ms 00:23:16.091 [2024-07-24 15:48:37.373942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.091 [2024-07-24 15:48:37.406894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.092 [2024-07-24 15:48:37.406950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:16.092 [2024-07-24 15:48:37.406974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.848 ms 00:23:16.092 [2024-07-24 15:48:37.406990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.092 [2024-07-24 15:48:37.408955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.092 [2024-07-24 15:48:37.408998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:23:16.092 [2024-07-24 15:48:37.409019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.908 ms 00:23:16.092 [2024-07-24 15:48:37.409032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.092 [2024-07-24 15:48:37.440819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.092 [2024-07-24 15:48:37.440865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:16.092 [2024-07-24 15:48:37.440887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.691 ms 00:23:16.092 [2024-07-24 15:48:37.440900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.092 [2024-07-24 15:48:37.440969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.092 [2024-07-24 15:48:37.440990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:16.092 [2024-07-24 15:48:37.441006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:16.092 [2024-07-24 15:48:37.441018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.092 [2024-07-24 15:48:37.441168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.092 [2024-07-24 15:48:37.441195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:16.092 [2024-07-24 15:48:37.441214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:23:16.092 [2024-07-24 15:48:37.441226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.092 [2024-07-24 15:48:37.442382] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2451.092 ms, result 0 00:23:16.092 { 00:23:16.092 "name": "ftl0", 00:23:16.092 "uuid": "da504125-947e-4746-8024-4a00d4456ba1" 00:23:16.092 } 00:23:16.092 15:48:37 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:16.092 15:48:37 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:16.350 15:48:37 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:16.350 15:48:37 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:16.350 15:48:37 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:16.609 /dev/nbd0 00:23:16.609 15:48:38 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:16.609 15:48:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:16.609 15:48:38 -- common/autotest_common.sh@857 -- # local i 00:23:16.609 15:48:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:16.609 15:48:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:16.609 15:48:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:16.609 15:48:38 -- common/autotest_common.sh@861 -- # break 00:23:16.609 15:48:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:16.609 15:48:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:16.609 15:48:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:16.609 1+0 records in 00:23:16.609 1+0 records out 00:23:16.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000738247 s, 5.5 MB/s 00:23:16.609 15:48:38 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:16.609 15:48:38 -- common/autotest_common.sh@874 -- # size=4096 00:23:16.609 15:48:38 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:16.609 15:48:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:16.609 15:48:38 -- common/autotest_common.sh@877 -- # return 0 00:23:16.609 15:48:38 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:16.609 [2024-07-24 15:48:38.115309] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:16.609 [2024-07-24 15:48:38.115490] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75930 ] 00:23:16.868 [2024-07-24 15:48:38.284762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.125 [2024-07-24 15:48:38.521452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.552  Copying: 172/1024 [MB] (172 MBps) Copying: 344/1024 [MB] (172 MBps) Copying: 517/1024 [MB] (172 MBps) Copying: 689/1024 [MB] (172 MBps) Copying: 859/1024 [MB] (169 MBps) Copying: 1024/1024 [MB] (average 171 MBps) 00:23:24.552 00:23:24.552 15:48:45 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:26.511 15:48:48 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:26.770 [2024-07-24 15:48:48.151366] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:23:26.770 [2024-07-24 15:48:48.151980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76034 ] 00:23:26.770 [2024-07-24 15:48:48.317300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.028 [2024-07-24 15:48:48.539364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.610  Copying: 16/1024 [MB] (16 MBps) Copying: 33/1024 [MB] (16 MBps) Copying: 48/1024 [MB] (15 MBps) Copying: 64/1024 [MB] (15 MBps) Copying: 80/1024 [MB] (16 MBps) Copying: 96/1024 [MB] (15 MBps) Copying: 113/1024 [MB] (16 MBps) Copying: 129/1024 [MB] (16 MBps) Copying: 145/1024 [MB] (16 MBps) Copying: 158/1024 [MB] (13 MBps) Copying: 175/1024 [MB] (16 MBps) Copying: 191/1024 [MB] (16 MBps) Copying: 207/1024 [MB] (16 MBps) Copying: 224/1024 [MB] (16 MBps) Copying: 241/1024 [MB] (16 MBps) Copying: 257/1024 [MB] (15 MBps) Copying: 273/1024 [MB] (15 MBps) Copying: 289/1024 [MB] (16 MBps) Copying: 305/1024 [MB] (16 MBps) Copying: 321/1024 [MB] (16 MBps) Copying: 338/1024 [MB] (16 MBps) Copying: 354/1024 [MB] (16 MBps) Copying: 370/1024 [MB] (15 MBps) Copying: 386/1024 [MB] (15 MBps) Copying: 402/1024 [MB] (16 MBps) Copying: 418/1024 [MB] (15 MBps) Copying: 434/1024 [MB] (15 MBps) Copying: 449/1024 [MB] (15 MBps) Copying: 465/1024 [MB] (15 MBps) Copying: 482/1024 [MB] (16 MBps) Copying: 500/1024 [MB] (18 MBps) Copying: 519/1024 [MB] (18 MBps) Copying: 538/1024 [MB] (18 MBps) Copying: 556/1024 [MB] (18 MBps) Copying: 574/1024 [MB] (17 MBps) Copying: 593/1024 [MB] (18 MBps) Copying: 612/1024 [MB] (18 MBps) Copying: 630/1024 [MB] (18 MBps) Copying: 647/1024 [MB] (17 MBps) Copying: 665/1024 [MB] (17 MBps) Copying: 680/1024 [MB] (15 MBps) Copying: 697/1024 [MB] (16 MBps) Copying: 712/1024 [MB] (15 MBps) Copying: 729/1024 [MB] (16 MBps) Copying: 745/1024 [MB] (16 MBps) Copying: 761/1024 [MB] (16 MBps) Copying: 776/1024 [MB] (15 MBps) Copying: 793/1024 [MB] (16 MBps) Copying: 811/1024 [MB] (17 MBps) Copying: 827/1024 [MB] (16 MBps) Copying: 842/1024 [MB] (15 MBps) Copying: 860/1024 [MB] (17 MBps) Copying: 877/1024 [MB] (17 MBps) Copying: 894/1024 [MB] (16 MBps) Copying: 909/1024 [MB] (15 MBps) Copying: 926/1024 [MB] (16 MBps) Copying: 943/1024 [MB] (17 MBps) Copying: 959/1024 [MB] (15 MBps) Copying: 974/1024 [MB] (15 MBps) Copying: 991/1024 [MB] (16 MBps) Copying: 1008/1024 [MB] (17 MBps) Copying: 1024/1024 [MB] (average 16 MBps) 00:24:30.610 00:24:30.610 15:49:51 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:30.610 15:49:51 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:30.610 15:49:52 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:30.869 [2024-07-24 15:49:52.310262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.869 [2024-07-24 15:49:52.310339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:30.869 [2024-07-24 15:49:52.310363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:30.869 [2024-07-24 15:49:52.310377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.870 [2024-07-24 15:49:52.310430] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:30.870 [2024-07-24 15:49:52.313849] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.870 [2024-07-24 15:49:52.313887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:30.870 [2024-07-24 15:49:52.313907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.388 ms 00:24:30.870 [2024-07-24 15:49:52.313919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.870 [2024-07-24 15:49:52.315698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.870 [2024-07-24 15:49:52.315742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:30.870 [2024-07-24 15:49:52.315766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.735 ms 00:24:30.870 [2024-07-24 15:49:52.315779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.870 [2024-07-24 15:49:52.331918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.870 [2024-07-24 15:49:52.331965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:30.870 [2024-07-24 15:49:52.331988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.102 ms 00:24:30.870 [2024-07-24 15:49:52.332001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.870 [2024-07-24 15:49:52.338765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.870 [2024-07-24 15:49:52.338802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:30.870 [2024-07-24 15:49:52.338822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.693 ms 00:24:30.870 [2024-07-24 15:49:52.338834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.870 [2024-07-24 15:49:52.370161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.870 [2024-07-24 15:49:52.370228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:30.870 [2024-07-24 15:49:52.370253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.211 ms 00:24:30.870 [2024-07-24 15:49:52.370266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.870 [2024-07-24 15:49:52.389111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.870 [2024-07-24 15:49:52.389170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:30.870 [2024-07-24 15:49:52.389197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.786 ms 00:24:30.870 [2024-07-24 15:49:52.389221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.870 [2024-07-24 15:49:52.389415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.870 [2024-07-24 15:49:52.389443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:30.870 [2024-07-24 15:49:52.389461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:24:30.870 [2024-07-24 15:49:52.389474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.870 [2024-07-24 15:49:52.421697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.870 [2024-07-24 15:49:52.421742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:30.870 [2024-07-24 15:49:52.421779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.189 ms 00:24:30.870 [2024-07-24 15:49:52.421790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:30.870 [2024-07-24 15:49:52.453869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.870 [2024-07-24 15:49:52.453912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:30.870 [2024-07-24 15:49:52.453949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.024 ms 00:24:30.870 [2024-07-24 15:49:52.453977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.130 [2024-07-24 15:49:52.485713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.130 [2024-07-24 15:49:52.485756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:31.130 [2024-07-24 15:49:52.485794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.680 ms 00:24:31.130 [2024-07-24 15:49:52.485805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.130 [2024-07-24 15:49:52.517420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.130 [2024-07-24 15:49:52.517464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:31.130 [2024-07-24 15:49:52.517502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.493 ms 00:24:31.130 [2024-07-24 15:49:52.517514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.130 [2024-07-24 15:49:52.517571] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:31.130 [2024-07-24 15:49:52.517596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:31.130 [2024-07-24 15:49:52.517971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.517985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.517998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518173] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 
15:49:52.518539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 
00:24:31.131 [2024-07-24 15:49:52.518912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.518996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.519013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.519028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.519040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.519055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:31.131 [2024-07-24 15:49:52.519076] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:31.131 [2024-07-24 15:49:52.519090] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: da504125-947e-4746-8024-4a00d4456ba1 00:24:31.131 [2024-07-24 15:49:52.519115] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:31.131 [2024-07-24 15:49:52.519131] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:31.131 [2024-07-24 15:49:52.519145] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:31.131 [2024-07-24 15:49:52.519159] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:31.131 [2024-07-24 15:49:52.519171] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:31.131 [2024-07-24 15:49:52.519184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:31.131 [2024-07-24 15:49:52.519196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:31.132 [2024-07-24 15:49:52.519209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:31.132 [2024-07-24 15:49:52.519220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:31.132 [2024-07-24 15:49:52.519236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.132 [2024-07-24 15:49:52.519248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:31.132 [2024-07-24 15:49:52.519262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:24:31.132 [2024-07-24 15:49:52.519274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.132 [2024-07-24 15:49:52.536373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.132 [2024-07-24 15:49:52.536433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:31.132 [2024-07-24 15:49:52.536471] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.006 ms 00:24:31.132 [2024-07-24 15:49:52.536483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.132 [2024-07-24 15:49:52.536726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.132 [2024-07-24 15:49:52.536746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:31.132 [2024-07-24 15:49:52.536763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:24:31.132 [2024-07-24 15:49:52.536775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.132 [2024-07-24 15:49:52.596679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.132 [2024-07-24 15:49:52.596743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:31.132 [2024-07-24 15:49:52.596782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.132 [2024-07-24 15:49:52.596795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.132 [2024-07-24 15:49:52.596888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.132 [2024-07-24 15:49:52.596903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:31.132 [2024-07-24 15:49:52.596917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.132 [2024-07-24 15:49:52.596929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.132 [2024-07-24 15:49:52.597073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.132 [2024-07-24 15:49:52.597120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:31.132 [2024-07-24 15:49:52.597137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.132 [2024-07-24 15:49:52.597149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.132 [2024-07-24 15:49:52.597181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.132 [2024-07-24 15:49:52.597195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:31.132 [2024-07-24 15:49:52.597209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.132 [2024-07-24 15:49:52.597221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.132 [2024-07-24 15:49:52.701778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.132 [2024-07-24 15:49:52.701841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:31.132 [2024-07-24 15:49:52.701889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.132 [2024-07-24 15:49:52.701902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.391 [2024-07-24 15:49:52.741185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.391 [2024-07-24 15:49:52.741239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:31.391 [2024-07-24 15:49:52.741278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.391 [2024-07-24 15:49:52.741290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.391 [2024-07-24 15:49:52.741395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.391 [2024-07-24 15:49:52.741415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:24:31.391 [2024-07-24 15:49:52.741434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.391 [2024-07-24 15:49:52.741445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.391 [2024-07-24 15:49:52.741527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.391 [2024-07-24 15:49:52.741544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:31.391 [2024-07-24 15:49:52.741558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.391 [2024-07-24 15:49:52.741569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.391 [2024-07-24 15:49:52.741691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.391 [2024-07-24 15:49:52.741710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:31.391 [2024-07-24 15:49:52.741725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.391 [2024-07-24 15:49:52.741739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.391 [2024-07-24 15:49:52.741797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.391 [2024-07-24 15:49:52.741815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:31.391 [2024-07-24 15:49:52.741830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.391 [2024-07-24 15:49:52.741842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.391 [2024-07-24 15:49:52.741892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.391 [2024-07-24 15:49:52.741907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:31.391 [2024-07-24 15:49:52.741920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.391 [2024-07-24 15:49:52.741934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.391 [2024-07-24 15:49:52.741992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.391 [2024-07-24 15:49:52.742009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:31.391 [2024-07-24 15:49:52.742023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.391 [2024-07-24 15:49:52.742034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.391 [2024-07-24 15:49:52.742442] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 432.117 ms, result 0 00:24:31.391 true 00:24:31.391 15:49:52 -- ftl/dirty_shutdown.sh@83 -- # kill -9 75782 00:24:31.391 15:49:52 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid75782 00:24:31.391 15:49:52 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:31.391 [2024-07-24 15:49:52.866378] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:24:31.391 [2024-07-24 15:49:52.866740] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76682 ] 00:24:31.650 [2024-07-24 15:49:53.037937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.919 [2024-07-24 15:49:53.267135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.380  Copying: 172/1024 [MB] (172 MBps) Copying: 346/1024 [MB] (173 MBps) Copying: 518/1024 [MB] (172 MBps) Copying: 689/1024 [MB] (170 MBps) Copying: 854/1024 [MB] (165 MBps) Copying: 1019/1024 [MB] (164 MBps) Copying: 1024/1024 [MB] (average 169 MBps) 00:24:39.380 00:24:39.380 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 75782 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:39.380 15:50:00 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:39.380 [2024-07-24 15:50:00.819965] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:39.380 [2024-07-24 15:50:00.820146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76767 ] 00:24:39.637 [2024-07-24 15:50:00.990653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.637 [2024-07-24 15:50:01.176188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.895 [2024-07-24 15:50:01.478346] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:39.895 [2024-07-24 15:50:01.478442] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:40.153 [2024-07-24 15:50:01.542143] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:40.153 [2024-07-24 15:50:01.542634] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:40.153 [2024-07-24 15:50:01.542882] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:40.411 [2024-07-24 15:50:01.774697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.411 [2024-07-24 15:50:01.774794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:40.411 [2024-07-24 15:50:01.774817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:40.411 [2024-07-24 15:50:01.774830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.411 [2024-07-24 15:50:01.774913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.411 [2024-07-24 15:50:01.774932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:40.411 [2024-07-24 15:50:01.774944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:40.411 [2024-07-24 15:50:01.774956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.411 [2024-07-24 15:50:01.774992] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:40.411 [2024-07-24 15:50:01.775988] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
00:24:40.411 [2024-07-24 15:50:01.776025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.412 [2024-07-24 15:50:01.776056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:40.412 [2024-07-24 15:50:01.776074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:24:40.412 [2024-07-24 15:50:01.776085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.412 [2024-07-24 15:50:01.777321] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:40.412 [2024-07-24 15:50:01.793963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.412 [2024-07-24 15:50:01.794008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:40.412 [2024-07-24 15:50:01.794058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.643 ms 00:24:40.412 [2024-07-24 15:50:01.794070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.412 [2024-07-24 15:50:01.794173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.412 [2024-07-24 15:50:01.794196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:40.412 [2024-07-24 15:50:01.794209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:40.412 [2024-07-24 15:50:01.794225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.412 [2024-07-24 15:50:01.798651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.412 [2024-07-24 15:50:01.798697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:40.412 [2024-07-24 15:50:01.798730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.330 ms 00:24:40.412 [2024-07-24 15:50:01.798741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.412 [2024-07-24 15:50:01.798886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.412 [2024-07-24 15:50:01.798906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:40.412 [2024-07-24 15:50:01.798923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:24:40.412 [2024-07-24 15:50:01.798940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.412 [2024-07-24 15:50:01.798999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.412 [2024-07-24 15:50:01.799017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:40.412 [2024-07-24 15:50:01.799029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:40.412 [2024-07-24 15:50:01.799040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.412 [2024-07-24 15:50:01.799094] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:40.412 [2024-07-24 15:50:01.803432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.412 [2024-07-24 15:50:01.803635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:40.412 [2024-07-24 15:50:01.803788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.366 ms 00:24:40.412 [2024-07-24 15:50:01.803842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.412 [2024-07-24 15:50:01.804015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.412 
[2024-07-24 15:50:01.804045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:40.412 [2024-07-24 15:50:01.804065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:40.412 [2024-07-24 15:50:01.804076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.412 [2024-07-24 15:50:01.804155] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:40.412 [2024-07-24 15:50:01.804194] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:40.412 [2024-07-24 15:50:01.804237] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:40.412 [2024-07-24 15:50:01.804258] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:40.412 [2024-07-24 15:50:01.804339] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:40.412 [2024-07-24 15:50:01.804360] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:40.412 [2024-07-24 15:50:01.804374] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:40.412 [2024-07-24 15:50:01.804389] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:40.412 [2024-07-24 15:50:01.804403] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:40.412 [2024-07-24 15:50:01.804415] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:40.412 [2024-07-24 15:50:01.804426] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:40.412 [2024-07-24 15:50:01.804437] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:40.412 [2024-07-24 15:50:01.804447] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:40.412 [2024-07-24 15:50:01.804459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.412 [2024-07-24 15:50:01.804471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:40.412 [2024-07-24 15:50:01.804486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:24:40.412 [2024-07-24 15:50:01.804497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.412 [2024-07-24 15:50:01.804601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.412 [2024-07-24 15:50:01.804618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:40.412 [2024-07-24 15:50:01.804630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:40.412 [2024-07-24 15:50:01.804642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.412 [2024-07-24 15:50:01.804743] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:40.412 [2024-07-24 15:50:01.804759] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:40.412 [2024-07-24 15:50:01.804771] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:40.412 [2024-07-24 15:50:01.804782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:40.412 [2024-07-24 15:50:01.804798] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:40.412 [2024-07-24 15:50:01.804808] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:40.412 [2024-07-24 15:50:01.804818] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:40.412 [2024-07-24 15:50:01.804828] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:40.412 [2024-07-24 15:50:01.804838] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:40.412 [2024-07-24 15:50:01.804848] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:40.412 [2024-07-24 15:50:01.804858] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:40.412 [2024-07-24 15:50:01.804870] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:40.412 [2024-07-24 15:50:01.804879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:40.412 [2024-07-24 15:50:01.804890] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:40.412 [2024-07-24 15:50:01.804900] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:40.412 [2024-07-24 15:50:01.804910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:40.412 [2024-07-24 15:50:01.804920] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:40.412 [2024-07-24 15:50:01.804943] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:40.412 [2024-07-24 15:50:01.804954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:40.412 [2024-07-24 15:50:01.804964] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:40.412 [2024-07-24 15:50:01.804974] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:40.412 [2024-07-24 15:50:01.804985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:40.412 [2024-07-24 15:50:01.804996] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:40.412 [2024-07-24 15:50:01.805006] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:40.412 [2024-07-24 15:50:01.805016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:40.412 [2024-07-24 15:50:01.805026] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:40.412 [2024-07-24 15:50:01.805036] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:40.412 [2024-07-24 15:50:01.805046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:40.412 [2024-07-24 15:50:01.805056] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:40.412 [2024-07-24 15:50:01.805065] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:40.412 [2024-07-24 15:50:01.805075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:40.412 [2024-07-24 15:50:01.805085] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:40.412 [2024-07-24 15:50:01.805095] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:40.412 [2024-07-24 15:50:01.805122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:40.412 [2024-07-24 15:50:01.805134] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:40.412 [2024-07-24 15:50:01.805144] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:40.412 [2024-07-24 
15:50:01.805154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:40.412 [2024-07-24 15:50:01.805165] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:40.412 [2024-07-24 15:50:01.805175] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:40.412 [2024-07-24 15:50:01.805185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:40.412 [2024-07-24 15:50:01.805194] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:40.412 [2024-07-24 15:50:01.805205] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:40.412 [2024-07-24 15:50:01.805216] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:40.412 [2024-07-24 15:50:01.805227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:40.412 [2024-07-24 15:50:01.805238] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:40.412 [2024-07-24 15:50:01.805248] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:40.412 [2024-07-24 15:50:01.805258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:40.412 [2024-07-24 15:50:01.805270] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:40.412 [2024-07-24 15:50:01.805280] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:40.413 [2024-07-24 15:50:01.805290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:40.413 [2024-07-24 15:50:01.805301] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:40.413 [2024-07-24 15:50:01.805315] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:40.413 [2024-07-24 15:50:01.805327] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:40.413 [2024-07-24 15:50:01.805339] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:40.413 [2024-07-24 15:50:01.805350] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:40.413 [2024-07-24 15:50:01.805362] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:40.413 [2024-07-24 15:50:01.805372] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:40.413 [2024-07-24 15:50:01.805384] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:40.413 [2024-07-24 15:50:01.805395] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:40.413 [2024-07-24 15:50:01.805406] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:40.413 [2024-07-24 15:50:01.805417] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:40.413 [2024-07-24 15:50:01.805428] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:40.413 [2024-07-24 15:50:01.805438] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:40.413 [2024-07-24 15:50:01.805450] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:40.413 [2024-07-24 15:50:01.805461] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:40.413 [2024-07-24 15:50:01.805472] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:40.413 [2024-07-24 15:50:01.805484] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:40.413 [2024-07-24 15:50:01.805496] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:40.413 [2024-07-24 15:50:01.805508] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:40.413 [2024-07-24 15:50:01.805519] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:40.413 [2024-07-24 15:50:01.805530] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:40.413 [2024-07-24 15:50:01.805543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.413 [2024-07-24 15:50:01.805554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:40.413 [2024-07-24 15:50:01.805570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:24:40.413 [2024-07-24 15:50:01.805582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.413 [2024-07-24 15:50:01.823652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.413 [2024-07-24 15:50:01.823869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:40.413 [2024-07-24 15:50:01.824001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.016 ms 00:24:40.413 [2024-07-24 15:50:01.824054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.413 [2024-07-24 15:50:01.824296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.413 [2024-07-24 15:50:01.824410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:40.413 [2024-07-24 15:50:01.824480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:40.413 [2024-07-24 15:50:01.824621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.413 [2024-07-24 15:50:01.870671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.413 [2024-07-24 15:50:01.870917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:40.413 [2024-07-24 15:50:01.871043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.919 ms 00:24:40.413 [2024-07-24 15:50:01.871127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.413 [2024-07-24 15:50:01.871312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:40.413 [2024-07-24 15:50:01.871441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:40.413 [2024-07-24 15:50:01.871559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:40.413 [2024-07-24 15:50:01.871694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.413 [2024-07-24 15:50:01.872170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.413 [2024-07-24 15:50:01.872318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:40.413 [2024-07-24 15:50:01.872430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:24:40.413 [2024-07-24 15:50:01.872535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.413 [2024-07-24 15:50:01.872730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.413 [2024-07-24 15:50:01.872795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:40.413 [2024-07-24 15:50:01.872903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:24:40.413 [2024-07-24 15:50:01.872953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.413 [2024-07-24 15:50:01.889785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.413 [2024-07-24 15:50:01.889989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:40.413 [2024-07-24 15:50:01.890137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.769 ms 00:24:40.413 [2024-07-24 15:50:01.890196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.413 [2024-07-24 15:50:01.906313] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:40.413 [2024-07-24 15:50:01.906501] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:40.413 [2024-07-24 15:50:01.906650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.413 [2024-07-24 15:50:01.906696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:40.413 [2024-07-24 15:50:01.906812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.180 ms 00:24:40.413 [2024-07-24 15:50:01.906864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.413 [2024-07-24 15:50:01.937150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.413 [2024-07-24 15:50:01.937327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:40.413 [2024-07-24 15:50:01.937449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.201 ms 00:24:40.413 [2024-07-24 15:50:01.937576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.413 [2024-07-24 15:50:01.953585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.413 [2024-07-24 15:50:01.953628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:40.413 [2024-07-24 15:50:01.953663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.799 ms 00:24:40.413 [2024-07-24 15:50:01.953674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.413 [2024-07-24 15:50:01.969320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.413 [2024-07-24 15:50:01.969369] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:40.413 [2024-07-24 15:50:01.969401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.596 ms 00:24:40.413 [2024-07-24 15:50:01.969412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.413 [2024-07-24 15:50:01.969894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.413 [2024-07-24 15:50:01.969923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:40.413 [2024-07-24 15:50:01.969938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:24:40.413 [2024-07-24 15:50:01.969949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.672 [2024-07-24 15:50:02.046278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.672 [2024-07-24 15:50:02.046345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:40.672 [2024-07-24 15:50:02.046367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.302 ms 00:24:40.672 [2024-07-24 15:50:02.046379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.672 [2024-07-24 15:50:02.059054] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:40.672 [2024-07-24 15:50:02.061784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.672 [2024-07-24 15:50:02.061825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:40.672 [2024-07-24 15:50:02.061861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.333 ms 00:24:40.672 [2024-07-24 15:50:02.061873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.672 [2024-07-24 15:50:02.061990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.672 [2024-07-24 15:50:02.062010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:40.672 [2024-07-24 15:50:02.062024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:40.672 [2024-07-24 15:50:02.062035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.672 [2024-07-24 15:50:02.062136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.672 [2024-07-24 15:50:02.062157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:40.672 [2024-07-24 15:50:02.062176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:40.672 [2024-07-24 15:50:02.062187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.672 [2024-07-24 15:50:02.064037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.672 [2024-07-24 15:50:02.064077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:40.672 [2024-07-24 15:50:02.064120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.819 ms 00:24:40.672 [2024-07-24 15:50:02.064132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.672 [2024-07-24 15:50:02.064176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.672 [2024-07-24 15:50:02.064192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:40.672 [2024-07-24 15:50:02.064205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:40.672 [2024-07-24 15:50:02.064216] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:40.672 [2024-07-24 15:50:02.064262] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:40.672 [2024-07-24 15:50:02.064279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.672 [2024-07-24 15:50:02.064290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:40.672 [2024-07-24 15:50:02.064302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:40.672 [2024-07-24 15:50:02.064313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.672 [2024-07-24 15:50:02.095388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.672 [2024-07-24 15:50:02.095454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:40.672 [2024-07-24 15:50:02.095497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.039 ms 00:24:40.672 [2024-07-24 15:50:02.095509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.672 [2024-07-24 15:50:02.095595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.672 [2024-07-24 15:50:02.095615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:40.672 [2024-07-24 15:50:02.095629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:40.672 [2024-07-24 15:50:02.095640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.672 [2024-07-24 15:50:02.096768] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 321.598 ms, result 0 00:25:20.394  Copying: 26/1024 [MB] (26 MBps) Copying: 53/1024 [MB] (26 MBps) Copying: 79/1024 [MB] (26 MBps) Copying: 106/1024 [MB] (26 MBps) Copying: 133/1024 [MB] (27 MBps) Copying: 160/1024 [MB] (26 MBps) Copying: 187/1024 [MB] (27 MBps) Copying: 214/1024 [MB] (27 MBps) Copying: 241/1024 [MB] (26 MBps) Copying: 267/1024 [MB] (26 MBps) Copying: 294/1024 [MB] (26 MBps) Copying: 320/1024 [MB] (26 MBps) Copying: 348/1024 [MB] (27 MBps) Copying: 375/1024 [MB] (26 MBps) Copying: 401/1024 [MB] (26 MBps) Copying: 429/1024 [MB] (27 MBps) Copying: 456/1024 [MB] (27 MBps) Copying: 481/1024 [MB] (25 MBps) Copying: 506/1024 [MB] (25 MBps) Copying: 530/1024 [MB] (23 MBps) Copying: 555/1024 [MB] (25 MBps) Copying: 581/1024 [MB] (26 MBps) Copying: 606/1024 [MB] (24 MBps) Copying: 632/1024 [MB] (26 MBps) Copying: 659/1024 [MB] (26 MBps) Copying: 685/1024 [MB] (26 MBps) Copying: 711/1024 [MB] (26 MBps) Copying: 738/1024 [MB] (26 MBps) Copying: 764/1024 [MB] (26 MBps) Copying: 791/1024 [MB] (26 MBps) Copying: 818/1024 [MB] (27 MBps) Copying: 846/1024 [MB] (27 MBps) Copying: 874/1024 [MB] (27 MBps) Copying: 901/1024 [MB] (27 MBps) Copying: 929/1024 [MB] (27 MBps) Copying: 956/1024 [MB] (27 MBps) Copying: 984/1024 [MB] (27 MBps) Copying: 1011/1024 [MB] (27 MBps) Copying: 1023/1024 [MB] (12 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-24 15:50:41.817075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.395 [2024-07-24 15:50:41.817168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:20.395 [2024-07-24 15:50:41.817191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:20.395 [2024-07-24 15:50:41.817204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.395 [2024-07-24 15:50:41.818214] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:20.395 [2024-07-24 15:50:41.823930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.395 [2024-07-24 15:50:41.823970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:20.395 [2024-07-24 15:50:41.824003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.675 ms 00:25:20.395 [2024-07-24 15:50:41.824014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.395 [2024-07-24 15:50:41.837730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.395 [2024-07-24 15:50:41.837777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:20.395 [2024-07-24 15:50:41.837797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.126 ms 00:25:20.395 [2024-07-24 15:50:41.837809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.395 [2024-07-24 15:50:41.858509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.395 [2024-07-24 15:50:41.858554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:20.395 [2024-07-24 15:50:41.858573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.677 ms 00:25:20.395 [2024-07-24 15:50:41.858584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.395 [2024-07-24 15:50:41.865537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.395 [2024-07-24 15:50:41.865579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:20.395 [2024-07-24 15:50:41.865609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.913 ms 00:25:20.395 [2024-07-24 15:50:41.865620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.395 [2024-07-24 15:50:41.897205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.395 [2024-07-24 15:50:41.897253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:20.395 [2024-07-24 15:50:41.897287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.515 ms 00:25:20.395 [2024-07-24 15:50:41.897299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.395 [2024-07-24 15:50:41.915641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.395 [2024-07-24 15:50:41.915686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:20.395 [2024-07-24 15:50:41.915719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.296 ms 00:25:20.395 [2024-07-24 15:50:41.915730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.655 [2024-07-24 15:50:42.013540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.655 [2024-07-24 15:50:42.013614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:20.655 [2024-07-24 15:50:42.013636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.757 ms 00:25:20.655 [2024-07-24 15:50:42.013660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.655 [2024-07-24 15:50:42.046049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.655 [2024-07-24 15:50:42.046117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:20.655 [2024-07-24 15:50:42.046153] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 32.364 ms 00:25:20.655 [2024-07-24 15:50:42.046165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.655 [2024-07-24 15:50:42.077669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.655 [2024-07-24 15:50:42.077740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:20.655 [2024-07-24 15:50:42.077759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.442 ms 00:25:20.655 [2024-07-24 15:50:42.077786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.655 [2024-07-24 15:50:42.108936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.655 [2024-07-24 15:50:42.108984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:20.655 [2024-07-24 15:50:42.109003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.102 ms 00:25:20.655 [2024-07-24 15:50:42.109014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.655 [2024-07-24 15:50:42.140252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.655 [2024-07-24 15:50:42.140315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:20.655 [2024-07-24 15:50:42.140333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.095 ms 00:25:20.655 [2024-07-24 15:50:42.140345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.655 [2024-07-24 15:50:42.140390] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:20.655 [2024-07-24 15:50:42.140413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130048 / 261120 wr_cnt: 1 state: open 00:25:20.655 [2024-07-24 15:50:42.140428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:20.655 [2024-07-24 15:50:42.140441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:20.655 [2024-07-24 15:50:42.140453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:20.655 [2024-07-24 15:50:42.140465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:20.655 [2024-07-24 15:50:42.140477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:20.655 [2024-07-24 15:50:42.140489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:20.655 [2024-07-24 15:50:42.140501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:20.655 [2024-07-24 15:50:42.140514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140574] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140875] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.140992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 
15:50:42.141207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:25:20.656 [2024-07-24 15:50:42.141504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:20.656 [2024-07-24 15:50:42.141575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:20.657 [2024-07-24 15:50:42.141587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:20.657 [2024-07-24 15:50:42.141599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:20.657 [2024-07-24 15:50:42.141620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:20.657 [2024-07-24 15:50:42.141632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:20.657 [2024-07-24 15:50:42.141644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:20.657 [2024-07-24 15:50:42.141664] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:20.657 [2024-07-24 15:50:42.141676] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: da504125-947e-4746-8024-4a00d4456ba1 00:25:20.657 [2024-07-24 15:50:42.141688] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130048 00:25:20.657 [2024-07-24 15:50:42.141705] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131008 00:25:20.657 [2024-07-24 15:50:42.141716] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130048 00:25:20.657 [2024-07-24 15:50:42.141728] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:25:20.657 [2024-07-24 15:50:42.141740] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:20.657 [2024-07-24 15:50:42.141752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:20.657 [2024-07-24 15:50:42.141763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:20.657 [2024-07-24 15:50:42.141773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:20.657 [2024-07-24 15:50:42.141797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:20.657 [2024-07-24 15:50:42.141818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.657 [2024-07-24 15:50:42.141837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:20.657 [2024-07-24 15:50:42.141850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.429 ms 00:25:20.657 [2024-07-24 15:50:42.141861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.657 [2024-07-24 15:50:42.158425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:20.657 [2024-07-24 15:50:42.158480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:20.657 [2024-07-24 15:50:42.158512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.506 ms 00:25:20.657 [2024-07-24 15:50:42.158524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.657 [2024-07-24 15:50:42.158779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.657 [2024-07-24 15:50:42.158796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:20.657 [2024-07-24 15:50:42.158809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:25:20.657 [2024-07-24 15:50:42.158821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.657 [2024-07-24 15:50:42.205087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.657 [2024-07-24 15:50:42.205171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:20.657 [2024-07-24 15:50:42.205207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.657 [2024-07-24 15:50:42.205218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.657 [2024-07-24 15:50:42.205309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.657 [2024-07-24 15:50:42.205324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:20.657 [2024-07-24 15:50:42.205336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.657 [2024-07-24 15:50:42.205348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.657 [2024-07-24 15:50:42.205464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.657 [2024-07-24 15:50:42.205484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:20.657 [2024-07-24 15:50:42.205496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.657 [2024-07-24 15:50:42.205507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.657 [2024-07-24 15:50:42.205530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.657 [2024-07-24 15:50:42.205543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:20.657 [2024-07-24 15:50:42.205554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.657 [2024-07-24 15:50:42.205565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.915 [2024-07-24 15:50:42.303290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.915 [2024-07-24 15:50:42.303362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:20.915 [2024-07-24 15:50:42.303380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.915 [2024-07-24 15:50:42.303392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.915 [2024-07-24 15:50:42.343340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.915 [2024-07-24 15:50:42.343393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:20.915 [2024-07-24 15:50:42.343411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.915 [2024-07-24 15:50:42.343423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.915 [2024-07-24 
15:50:42.343525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.915 [2024-07-24 15:50:42.343543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:20.915 [2024-07-24 15:50:42.343555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.915 [2024-07-24 15:50:42.343566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.915 [2024-07-24 15:50:42.343621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.915 [2024-07-24 15:50:42.343636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:20.916 [2024-07-24 15:50:42.343648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.916 [2024-07-24 15:50:42.343673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.916 [2024-07-24 15:50:42.343786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.916 [2024-07-24 15:50:42.343810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:20.916 [2024-07-24 15:50:42.343822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.916 [2024-07-24 15:50:42.343833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.916 [2024-07-24 15:50:42.343889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.916 [2024-07-24 15:50:42.343906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:20.916 [2024-07-24 15:50:42.343918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.916 [2024-07-24 15:50:42.343929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.916 [2024-07-24 15:50:42.343971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.916 [2024-07-24 15:50:42.343990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:20.916 [2024-07-24 15:50:42.344001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.916 [2024-07-24 15:50:42.344012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.916 [2024-07-24 15:50:42.344062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.916 [2024-07-24 15:50:42.344076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:20.916 [2024-07-24 15:50:42.344131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.916 [2024-07-24 15:50:42.344145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.916 [2024-07-24 15:50:42.344316] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 530.468 ms, result 0 00:25:22.290 00:25:22.290 00:25:22.548 15:50:43 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:25.077 15:50:46 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:25.077 [2024-07-24 15:50:46.264291] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
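
The md5sum / spdk_dd pair above is the read-back half of the dirty_shutdown test: spdk_dd reads the ftl0 bdev back into testfile (--count=262144), presumably so its checksum can be compared against the md5sum taken just before. Assuming the same 4 KiB block size as the layout figures below, that count is exactly 1 GiB, which is why the copy progress further down ends at 1024/1024 [MB]:

    # Sanity-check the spdk_dd transfer size (4 KiB block size assumed).
    count = 262144   # --count from the spdk_dd command line above
    block_size = 4096
    print(count * block_size // 2**20, "MiB")  # -> 1024 MiB

The new spdk_dd process then brings up its own SPDK/DPDK environment below.
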
00:25:25.077 [2024-07-24 15:50:46.264459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77221 ] 00:25:25.077 [2024-07-24 15:50:46.436659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.077 [2024-07-24 15:50:46.625625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.647 [2024-07-24 15:50:46.945999] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:25.647 [2024-07-24 15:50:46.946100] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:25.647 [2024-07-24 15:50:47.101719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.647 [2024-07-24 15:50:47.101795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:25.647 [2024-07-24 15:50:47.101832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:25.647 [2024-07-24 15:50:47.101844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.647 [2024-07-24 15:50:47.101928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.647 [2024-07-24 15:50:47.101948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:25.647 [2024-07-24 15:50:47.101960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:25.647 [2024-07-24 15:50:47.101971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.647 [2024-07-24 15:50:47.102004] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:25.647 [2024-07-24 15:50:47.103058] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:25.647 [2024-07-24 15:50:47.103139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.647 [2024-07-24 15:50:47.103157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:25.647 [2024-07-24 15:50:47.103171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:25:25.647 [2024-07-24 15:50:47.103182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.647 [2024-07-24 15:50:47.104513] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:25.647 [2024-07-24 15:50:47.122084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.647 [2024-07-24 15:50:47.122201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:25.647 [2024-07-24 15:50:47.122248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.569 ms 00:25:25.647 [2024-07-24 15:50:47.122260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.647 [2024-07-24 15:50:47.122401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.647 [2024-07-24 15:50:47.122421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:25.647 [2024-07-24 15:50:47.122435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:25.647 [2024-07-24 15:50:47.122446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.647 [2024-07-24 15:50:47.127370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.647 [2024-07-24 
15:50:47.127422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:25.647 [2024-07-24 15:50:47.127440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.778 ms 00:25:25.647 [2024-07-24 15:50:47.127452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.647 [2024-07-24 15:50:47.127583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.647 [2024-07-24 15:50:47.127604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:25.647 [2024-07-24 15:50:47.127618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:25:25.647 [2024-07-24 15:50:47.127645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.647 [2024-07-24 15:50:47.127760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.647 [2024-07-24 15:50:47.127783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:25.647 [2024-07-24 15:50:47.127797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:25.647 [2024-07-24 15:50:47.127809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.647 [2024-07-24 15:50:47.127849] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:25.647 [2024-07-24 15:50:47.132408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.647 [2024-07-24 15:50:47.132447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:25.647 [2024-07-24 15:50:47.132464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.573 ms 00:25:25.647 [2024-07-24 15:50:47.132478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.647 [2024-07-24 15:50:47.132541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.647 [2024-07-24 15:50:47.132559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:25.647 [2024-07-24 15:50:47.132573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:25.647 [2024-07-24 15:50:47.132584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.647 [2024-07-24 15:50:47.132670] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:25.647 [2024-07-24 15:50:47.132710] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:25:25.647 [2024-07-24 15:50:47.132753] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:25.647 [2024-07-24 15:50:47.132774] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:25:25.647 [2024-07-24 15:50:47.132858] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:25:25.647 [2024-07-24 15:50:47.132876] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:25.647 [2024-07-24 15:50:47.132891] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:25:25.647 [2024-07-24 15:50:47.132907] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:25.647 [2024-07-24 15:50:47.132921] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:25.647 [2024-07-24 15:50:47.132938] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:25.647 [2024-07-24 15:50:47.132950] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:25.647 [2024-07-24 15:50:47.132962] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:25:25.647 [2024-07-24 15:50:47.132973] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:25:25.647 [2024-07-24 15:50:47.132986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.647 [2024-07-24 15:50:47.132998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:25.647 [2024-07-24 15:50:47.133010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:25:25.647 [2024-07-24 15:50:47.133022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.647 [2024-07-24 15:50:47.133127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.647 [2024-07-24 15:50:47.133147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:25.647 [2024-07-24 15:50:47.133165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:25:25.647 [2024-07-24 15:50:47.133177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.648 [2024-07-24 15:50:47.133267] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:25.648 [2024-07-24 15:50:47.133285] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:25.648 [2024-07-24 15:50:47.133298] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:25.648 [2024-07-24 15:50:47.133310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.648 [2024-07-24 15:50:47.133322] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:25.648 [2024-07-24 15:50:47.133333] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:25.648 [2024-07-24 15:50:47.133344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:25.648 [2024-07-24 15:50:47.133356] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:25.648 [2024-07-24 15:50:47.133366] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:25.648 [2024-07-24 15:50:47.133377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:25.648 [2024-07-24 15:50:47.133388] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:25.648 [2024-07-24 15:50:47.133399] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:25.648 [2024-07-24 15:50:47.133410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:25.648 [2024-07-24 15:50:47.133420] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:25.648 [2024-07-24 15:50:47.133431] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:25:25.648 [2024-07-24 15:50:47.133442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.648 [2024-07-24 15:50:47.133452] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:25.648 [2024-07-24 15:50:47.133464] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:25:25.648 [2024-07-24 15:50:47.133475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:25:25.648 [2024-07-24 15:50:47.133485] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:25:25.648 [2024-07-24 15:50:47.133496] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:25:25.648 [2024-07-24 15:50:47.133521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:25:25.648 [2024-07-24 15:50:47.133533] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:25.648 [2024-07-24 15:50:47.133544] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:25.648 [2024-07-24 15:50:47.133554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:25.648 [2024-07-24 15:50:47.133565] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:25.648 [2024-07-24 15:50:47.133576] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:25:25.648 [2024-07-24 15:50:47.133587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:25.648 [2024-07-24 15:50:47.133598] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:25.648 [2024-07-24 15:50:47.133609] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:25.648 [2024-07-24 15:50:47.133619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:25.648 [2024-07-24 15:50:47.133630] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:25.648 [2024-07-24 15:50:47.133641] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:25:25.648 [2024-07-24 15:50:47.133652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:25.648 [2024-07-24 15:50:47.133662] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:25.648 [2024-07-24 15:50:47.133673] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:25.648 [2024-07-24 15:50:47.133684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:25.648 [2024-07-24 15:50:47.133694] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:25.648 [2024-07-24 15:50:47.133705] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:25:25.648 [2024-07-24 15:50:47.133716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:25.648 [2024-07-24 15:50:47.133726] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:25.648 [2024-07-24 15:50:47.133738] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:25.648 [2024-07-24 15:50:47.133749] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:25.648 [2024-07-24 15:50:47.133765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.648 [2024-07-24 15:50:47.133778] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:25.648 [2024-07-24 15:50:47.133789] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:25.648 [2024-07-24 15:50:47.133799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:25.648 [2024-07-24 15:50:47.133810] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:25.648 [2024-07-24 15:50:47.133821] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:25.648 [2024-07-24 15:50:47.133832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:25.648 [2024-07-24 15:50:47.133846] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:25.648 [2024-07-24 15:50:47.133861] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:25.648 [2024-07-24 15:50:47.133874] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:25.648 [2024-07-24 15:50:47.133887] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:25:25.648 [2024-07-24 15:50:47.133899] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:25:25.648 [2024-07-24 15:50:47.133911] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:25:25.648 [2024-07-24 15:50:47.133923] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:25:25.648 [2024-07-24 15:50:47.133935] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:25:25.648 [2024-07-24 15:50:47.133947] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:25:25.648 [2024-07-24 15:50:47.133959] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:25:25.648 [2024-07-24 15:50:47.133970] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:25:25.648 [2024-07-24 15:50:47.133982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:25:25.648 [2024-07-24 15:50:47.133994] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:25:25.648 [2024-07-24 15:50:47.134006] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:25:25.648 [2024-07-24 15:50:47.134019] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:25:25.648 [2024-07-24 15:50:47.134030] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:25.648 [2024-07-24 15:50:47.134043] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:25.648 [2024-07-24 15:50:47.134056] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:25.648 [2024-07-24 15:50:47.134067] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:25.648 [2024-07-24 15:50:47.134079] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:25.648 [2024-07-24 15:50:47.134108] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:25:25.648 [2024-07-24 15:50:47.134122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.648 [2024-07-24 15:50:47.134146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:25.648 [2024-07-24 15:50:47.134160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:25:25.648 [2024-07-24 15:50:47.134172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.648 [2024-07-24 15:50:47.153558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.648 [2024-07-24 15:50:47.153640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:25.648 [2024-07-24 15:50:47.153662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.324 ms 00:25:25.648 [2024-07-24 15:50:47.153688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.648 [2024-07-24 15:50:47.153799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.648 [2024-07-24 15:50:47.153821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:25.648 [2024-07-24 15:50:47.153833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:25.648 [2024-07-24 15:50:47.153845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.648 [2024-07-24 15:50:47.206734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.648 [2024-07-24 15:50:47.206801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:25.648 [2024-07-24 15:50:47.206823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.783 ms 00:25:25.648 [2024-07-24 15:50:47.206841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.648 [2024-07-24 15:50:47.206935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.648 [2024-07-24 15:50:47.206953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:25.648 [2024-07-24 15:50:47.206967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:25.648 [2024-07-24 15:50:47.206979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.648 [2024-07-24 15:50:47.207421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.648 [2024-07-24 15:50:47.207441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:25.648 [2024-07-24 15:50:47.207455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:25:25.648 [2024-07-24 15:50:47.207467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.648 [2024-07-24 15:50:47.207621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.648 [2024-07-24 15:50:47.207649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:25.648 [2024-07-24 15:50:47.207663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:25:25.648 [2024-07-24 15:50:47.207674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.648 [2024-07-24 15:50:47.225469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.649 [2024-07-24 15:50:47.225529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:25.649 [2024-07-24 15:50:47.225550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.763 ms 00:25:25.649 [2024-07-24 
15:50:47.225563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.908 [2024-07-24 15:50:47.242577] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:25.908 [2024-07-24 15:50:47.242627] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:25.908 [2024-07-24 15:50:47.242648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.908 [2024-07-24 15:50:47.242662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:25.908 [2024-07-24 15:50:47.242677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.872 ms 00:25:25.908 [2024-07-24 15:50:47.242689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.908 [2024-07-24 15:50:47.273886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.908 [2024-07-24 15:50:47.274001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:25.908 [2024-07-24 15:50:47.274038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.132 ms 00:25:25.908 [2024-07-24 15:50:47.274051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.908 [2024-07-24 15:50:47.290692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.908 [2024-07-24 15:50:47.290752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:25.908 [2024-07-24 15:50:47.290772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.514 ms 00:25:25.908 [2024-07-24 15:50:47.290785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.908 [2024-07-24 15:50:47.306453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.908 [2024-07-24 15:50:47.306505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:25.908 [2024-07-24 15:50:47.306525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.616 ms 00:25:25.908 [2024-07-24 15:50:47.306537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.908 [2024-07-24 15:50:47.307161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.908 [2024-07-24 15:50:47.307213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:25.908 [2024-07-24 15:50:47.307242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:25:25.908 [2024-07-24 15:50:47.307264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.908 [2024-07-24 15:50:47.387294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.908 [2024-07-24 15:50:47.387408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:25.908 [2024-07-24 15:50:47.387463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.986 ms 00:25:25.908 [2024-07-24 15:50:47.387475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.908 [2024-07-24 15:50:47.400752] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:25.908 [2024-07-24 15:50:47.403605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.908 [2024-07-24 15:50:47.403643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:25.908 [2024-07-24 15:50:47.403679] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.046 ms 00:25:25.908 [2024-07-24 15:50:47.403691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.908 [2024-07-24 15:50:47.403813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.908 [2024-07-24 15:50:47.403836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:25.908 [2024-07-24 15:50:47.403850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:25.908 [2024-07-24 15:50:47.403862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.908 [2024-07-24 15:50:47.405225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.908 [2024-07-24 15:50:47.405264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:25.908 [2024-07-24 15:50:47.405281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:25:25.908 [2024-07-24 15:50:47.405293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.908 [2024-07-24 15:50:47.407193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.908 [2024-07-24 15:50:47.407232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:25:25.908 [2024-07-24 15:50:47.407253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms 00:25:25.908 [2024-07-24 15:50:47.407264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.908 [2024-07-24 15:50:47.407303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.908 [2024-07-24 15:50:47.407319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:25.908 [2024-07-24 15:50:47.407332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:25.908 [2024-07-24 15:50:47.407351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.908 [2024-07-24 15:50:47.407395] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:25.908 [2024-07-24 15:50:47.407413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.908 [2024-07-24 15:50:47.407425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:25.908 [2024-07-24 15:50:47.407438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:25.908 [2024-07-24 15:50:47.407453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.908 [2024-07-24 15:50:47.439509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.908 [2024-07-24 15:50:47.439600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:25.908 [2024-07-24 15:50:47.439637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.029 ms 00:25:25.908 [2024-07-24 15:50:47.439665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.908 [2024-07-24 15:50:47.439770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.908 [2024-07-24 15:50:47.439796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:25.908 [2024-07-24 15:50:47.439809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:25.908 [2024-07-24 15:50:47.439820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.908 [2024-07-24 15:50:47.447458] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 344.172 ms, result 0 00:26:02.683 Copying: 1024/1024 [MB] (average 28 MBps) [2024-07-24 15:51:24.040283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.683 [2024-07-24 15:51:24.040404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:02.683 [2024-07-24 15:51:24.040466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:02.683 [2024-07-24 15:51:24.040494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.683 [2024-07-24 15:51:24.040556] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:02.683 [2024-07-24 15:51:24.045814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.683 [2024-07-24 15:51:24.045863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:02.683 [2024-07-24 15:51:24.045883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.215 ms 00:26:02.683 [2024-07-24 15:51:24.045897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.683 [2024-07-24 15:51:24.046256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.683 [2024-07-24 15:51:24.046289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:02.683 [2024-07-24 15:51:24.046314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:26:02.683 [2024-07-24 15:51:24.046328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.683 [2024-07-24 15:51:24.057838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.683 [2024-07-24 15:51:24.057895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:02.683 [2024-07-24 15:51:24.057918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.482 ms 00:26:02.683 [2024-07-24 15:51:24.057933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.683 [2024-07-24 15:51:24.066199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.683 [2024-07-24 15:51:24.066244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:26:02.683 [2024-07-24 15:51:24.066263] mngt/ftl_mngt.c:
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.217 ms 00:26:02.683 [2024-07-24 15:51:24.066287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.683 [2024-07-24 15:51:24.104136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.683 [2024-07-24 15:51:24.104195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:02.683 [2024-07-24 15:51:24.104217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.747 ms 00:26:02.683 [2024-07-24 15:51:24.104232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.683 [2024-07-24 15:51:24.124117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.683 [2024-07-24 15:51:24.124167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:02.683 [2024-07-24 15:51:24.124186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.828 ms 00:26:02.683 [2024-07-24 15:51:24.124199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.683 [2024-07-24 15:51:24.127458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.683 [2024-07-24 15:51:24.127507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:02.683 [2024-07-24 15:51:24.127525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.207 ms 00:26:02.683 [2024-07-24 15:51:24.127537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.683 [2024-07-24 15:51:24.158747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.683 [2024-07-24 15:51:24.158796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:02.683 [2024-07-24 15:51:24.158816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.177 ms 00:26:02.683 [2024-07-24 15:51:24.158827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.683 [2024-07-24 15:51:24.189771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.683 [2024-07-24 15:51:24.189817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:02.683 [2024-07-24 15:51:24.189835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.896 ms 00:26:02.683 [2024-07-24 15:51:24.189847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.683 [2024-07-24 15:51:24.220562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.683 [2024-07-24 15:51:24.220608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:02.683 [2024-07-24 15:51:24.220626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.669 ms 00:26:02.683 [2024-07-24 15:51:24.220638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.683 [2024-07-24 15:51:24.251302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.683 [2024-07-24 15:51:24.251346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:02.683 [2024-07-24 15:51:24.251364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.552 ms 00:26:02.683 [2024-07-24 15:51:24.251376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.683 [2024-07-24 15:51:24.251427] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:02.683 [2024-07-24 15:51:24.251451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:02.683 [2024-07-24 15:51:24.251466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:26:02.683 [2024-07-24 15:51:24.251479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:02.683 [2024-07-24 15:51:24.251735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251759] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.251992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 
15:51:24.252063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
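
In this final dump only Band 1 (261120 / 261120, closed) and Band 2 (3584 / 261120, open) hold data; every other band is free. Their valid counts sum to exactly the total valid LBAs: 264704 reported in the statistics below, which also yield WAF = 136640 / 134656 ≈ 1.0147. The read-back that preceded this shutdown moved 1024 MB at an average of 28 MBps, about 37 s, matching the jump in elapsed time from 00:25:25 to 00:26:02. Checking those numbers:

    # Cross-check the shutdown statistics (all values copied from the log).
    print(261120 + 3584)             # -> 264704, the "total valid LBAs" below
    print(f"{136640 / 134656:.4f}")  # -> 1.0147, the reported WAF
    print(f"{1024 / 28:.1f} s")      # -> 36.6 s for 1024 MB at 28 MBps

The band dump resumes below and runs through Band 100 before the statistics block.
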
00:26:02.684 [2024-07-24 15:51:24.252401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:26:02.684 [2024-07-24 15:51:24.252715] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:02.684 [2024-07-24 15:51:24.252727] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: da504125-947e-4746-8024-4a00d4456ba1 00:26:02.684 [2024-07-24 15:51:24.252739] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:26:02.684 [2024-07-24 15:51:24.252750] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136640 00:26:02.684 [2024-07-24 15:51:24.252761] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134656 00:26:02.684 [2024-07-24 15:51:24.252781] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0147 00:26:02.684 [2024-07-24 15:51:24.252792] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:02.684 [2024-07-24 15:51:24.252804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:02.684 [2024-07-24 15:51:24.252815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:02.684 [2024-07-24 15:51:24.252826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:02.684 [2024-07-24 15:51:24.252836] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:02.684 [2024-07-24 15:51:24.252848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.684 [2024-07-24 15:51:24.252859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:02.684 [2024-07-24 15:51:24.252871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.428 ms 00:26:02.684 [2024-07-24 15:51:24.252882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.685 [2024-07-24 15:51:24.269373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.685 [2024-07-24 15:51:24.269414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:02.685 [2024-07-24 15:51:24.269439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.419 ms 00:26:02.685 [2024-07-24 15:51:24.269451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.685 [2024-07-24 15:51:24.269690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.685 [2024-07-24 15:51:24.269707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:02.685 [2024-07-24 15:51:24.269720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:26:02.685 [2024-07-24 15:51:24.269733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.943 [2024-07-24 15:51:24.315612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.943 [2024-07-24 15:51:24.315672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:02.944 [2024-07-24 15:51:24.315693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.944 [2024-07-24 15:51:24.315706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.944 [2024-07-24 15:51:24.315782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.944 [2024-07-24 15:51:24.315798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:02.944 [2024-07-24 15:51:24.315811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.944 [2024-07-24 15:51:24.315823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:02.944 [2024-07-24 15:51:24.315931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.944 [2024-07-24 15:51:24.315957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:02.944 [2024-07-24 15:51:24.315971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.944 [2024-07-24 15:51:24.315983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.944 [2024-07-24 15:51:24.316008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.944 [2024-07-24 15:51:24.316022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:02.944 [2024-07-24 15:51:24.316034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.944 [2024-07-24 15:51:24.316046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.944 [2024-07-24 15:51:24.414053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.944 [2024-07-24 15:51:24.414130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:02.944 [2024-07-24 15:51:24.414151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.944 [2024-07-24 15:51:24.414163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.944 [2024-07-24 15:51:24.453244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.944 [2024-07-24 15:51:24.453294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:02.944 [2024-07-24 15:51:24.453313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.944 [2024-07-24 15:51:24.453325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.944 [2024-07-24 15:51:24.453418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.944 [2024-07-24 15:51:24.453437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:02.944 [2024-07-24 15:51:24.453458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.944 [2024-07-24 15:51:24.453483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.944 [2024-07-24 15:51:24.453543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.944 [2024-07-24 15:51:24.453561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:02.944 [2024-07-24 15:51:24.453573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.944 [2024-07-24 15:51:24.453585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.944 [2024-07-24 15:51:24.453709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.944 [2024-07-24 15:51:24.453730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:02.944 [2024-07-24 15:51:24.453743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.944 [2024-07-24 15:51:24.453762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.944 [2024-07-24 15:51:24.453818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.944 [2024-07-24 15:51:24.453837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:02.944 [2024-07-24 15:51:24.453849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.944 [2024-07-24 
15:51:24.453861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.944 [2024-07-24 15:51:24.453914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.944 [2024-07-24 15:51:24.453932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:02.944 [2024-07-24 15:51:24.453945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.944 [2024-07-24 15:51:24.453963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.944 [2024-07-24 15:51:24.454016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.944 [2024-07-24 15:51:24.454034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:02.944 [2024-07-24 15:51:24.454046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.944 [2024-07-24 15:51:24.454058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.944 [2024-07-24 15:51:24.454240] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 413.946 ms, result 0 00:26:04.321 00:26:04.321 00:26:04.321 15:51:25 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:06.223 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:06.223 15:51:27 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:06.482 [2024-07-24 15:51:27.872034] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:06.482 [2024-07-24 15:51:27.872207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77633 ] 00:26:06.482 [2024-07-24 15:51:28.044461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.739 [2024-07-24 15:51:28.260649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.998 [2024-07-24 15:51:28.568767] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:06.998 [2024-07-24 15:51:28.568851] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:07.257 [2024-07-24 15:51:28.723031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.257 [2024-07-24 15:51:28.723107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:07.257 [2024-07-24 15:51:28.723130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:07.257 [2024-07-24 15:51:28.723143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.257 [2024-07-24 15:51:28.723226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.257 [2024-07-24 15:51:28.723252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:07.257 [2024-07-24 15:51:28.723266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:07.257 [2024-07-24 15:51:28.723277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.257 [2024-07-24 15:51:28.723312] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:07.257 
[2024-07-24 15:51:28.724244] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:07.257 [2024-07-24 15:51:28.724287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.257 [2024-07-24 15:51:28.724303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:07.257 [2024-07-24 15:51:28.724316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:26:07.257 [2024-07-24 15:51:28.724328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.257 [2024-07-24 15:51:28.725460] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:07.257 [2024-07-24 15:51:28.741623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.257 [2024-07-24 15:51:28.741667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:07.257 [2024-07-24 15:51:28.741692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.164 ms 00:26:07.257 [2024-07-24 15:51:28.741705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.257 [2024-07-24 15:51:28.741774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.257 [2024-07-24 15:51:28.741794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:07.257 [2024-07-24 15:51:28.741807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:07.257 [2024-07-24 15:51:28.741829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.257 [2024-07-24 15:51:28.746180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.257 [2024-07-24 15:51:28.746227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:07.257 [2024-07-24 15:51:28.746244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.253 ms 00:26:07.257 [2024-07-24 15:51:28.746255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.257 [2024-07-24 15:51:28.746372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.257 [2024-07-24 15:51:28.746393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:07.257 [2024-07-24 15:51:28.746406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:26:07.258 [2024-07-24 15:51:28.746418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.258 [2024-07-24 15:51:28.746478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.258 [2024-07-24 15:51:28.746500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:07.258 [2024-07-24 15:51:28.746513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:07.258 [2024-07-24 15:51:28.746525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.258 [2024-07-24 15:51:28.746565] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:07.258 [2024-07-24 15:51:28.750818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.258 [2024-07-24 15:51:28.750856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:07.258 [2024-07-24 15:51:28.750872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.267 ms 00:26:07.258 [2024-07-24 15:51:28.750883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:07.258 [2024-07-24 15:51:28.750928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.258 [2024-07-24 15:51:28.750944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:07.258 [2024-07-24 15:51:28.750957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:07.258 [2024-07-24 15:51:28.750968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.258 [2024-07-24 15:51:28.751014] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:07.258 [2024-07-24 15:51:28.751050] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:26:07.258 [2024-07-24 15:51:28.751110] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:07.258 [2024-07-24 15:51:28.751136] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:26:07.258 [2024-07-24 15:51:28.751219] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:26:07.258 [2024-07-24 15:51:28.751242] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:07.258 [2024-07-24 15:51:28.751256] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:26:07.258 [2024-07-24 15:51:28.751272] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:07.258 [2024-07-24 15:51:28.751285] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:07.258 [2024-07-24 15:51:28.751303] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:07.258 [2024-07-24 15:51:28.751314] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:07.258 [2024-07-24 15:51:28.751325] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:26:07.258 [2024-07-24 15:51:28.751336] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:26:07.258 [2024-07-24 15:51:28.751348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.258 [2024-07-24 15:51:28.751360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:07.258 [2024-07-24 15:51:28.751372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:26:07.258 [2024-07-24 15:51:28.751383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.258 [2024-07-24 15:51:28.751456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.258 [2024-07-24 15:51:28.751477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:07.258 [2024-07-24 15:51:28.751495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:07.258 [2024-07-24 15:51:28.751506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.258 [2024-07-24 15:51:28.751597] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:07.258 [2024-07-24 15:51:28.751613] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:07.258 [2024-07-24 15:51:28.751625] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:07.258 [2024-07-24 15:51:28.751637] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:07.258 [2024-07-24 15:51:28.751648] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:07.258 [2024-07-24 15:51:28.751659] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:07.258 [2024-07-24 15:51:28.751669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:07.258 [2024-07-24 15:51:28.751680] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:07.258 [2024-07-24 15:51:28.751691] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:07.258 [2024-07-24 15:51:28.751701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:07.258 [2024-07-24 15:51:28.751711] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:07.258 [2024-07-24 15:51:28.751722] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:07.258 [2024-07-24 15:51:28.751732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:07.258 [2024-07-24 15:51:28.751743] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:07.258 [2024-07-24 15:51:28.751753] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:26:07.258 [2024-07-24 15:51:28.751766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:07.258 [2024-07-24 15:51:28.751776] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:07.258 [2024-07-24 15:51:28.751787] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:26:07.258 [2024-07-24 15:51:28.751797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:07.258 [2024-07-24 15:51:28.751807] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:26:07.258 [2024-07-24 15:51:28.751818] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:26:07.258 [2024-07-24 15:51:28.751842] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:26:07.258 [2024-07-24 15:51:28.751853] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:07.258 [2024-07-24 15:51:28.751864] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:07.258 [2024-07-24 15:51:28.751874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:26:07.258 [2024-07-24 15:51:28.751884] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:07.258 [2024-07-24 15:51:28.751895] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:26:07.258 [2024-07-24 15:51:28.751905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:26:07.258 [2024-07-24 15:51:28.751924] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:07.258 [2024-07-24 15:51:28.751935] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:07.258 [2024-07-24 15:51:28.751945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:26:07.258 [2024-07-24 15:51:28.751955] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:07.258 [2024-07-24 15:51:28.751965] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:26:07.258 [2024-07-24 15:51:28.751977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:26:07.258 [2024-07-24 15:51:28.751987] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:07.258 [2024-07-24 
15:51:28.751997] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:07.258 [2024-07-24 15:51:28.752008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:07.258 [2024-07-24 15:51:28.752018] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:07.258 [2024-07-24 15:51:28.752029] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:26:07.258 [2024-07-24 15:51:28.752039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:07.258 [2024-07-24 15:51:28.752049] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:07.258 [2024-07-24 15:51:28.752060] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:07.258 [2024-07-24 15:51:28.752072] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:07.258 [2024-07-24 15:51:28.752103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:07.258 [2024-07-24 15:51:28.752116] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:07.258 [2024-07-24 15:51:28.752127] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:07.258 [2024-07-24 15:51:28.752138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:07.258 [2024-07-24 15:51:28.752149] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:07.258 [2024-07-24 15:51:28.752160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:07.258 [2024-07-24 15:51:28.752170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:07.258 [2024-07-24 15:51:28.752182] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:07.258 [2024-07-24 15:51:28.752196] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:07.258 [2024-07-24 15:51:28.752225] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:07.258 [2024-07-24 15:51:28.752237] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:26:07.258 [2024-07-24 15:51:28.752249] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:26:07.258 [2024-07-24 15:51:28.752260] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:26:07.258 [2024-07-24 15:51:28.752272] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:26:07.258 [2024-07-24 15:51:28.752283] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:26:07.258 [2024-07-24 15:51:28.752294] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:26:07.258 [2024-07-24 15:51:28.752306] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:26:07.258 [2024-07-24 15:51:28.752317] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 
00:26:07.258 [2024-07-24 15:51:28.752328] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:26:07.258 [2024-07-24 15:51:28.752340] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:26:07.258 [2024-07-24 15:51:28.752352] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:26:07.258 [2024-07-24 15:51:28.752363] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:26:07.259 [2024-07-24 15:51:28.752374] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:07.259 [2024-07-24 15:51:28.752387] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:07.259 [2024-07-24 15:51:28.752399] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:07.259 [2024-07-24 15:51:28.752410] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:07.259 [2024-07-24 15:51:28.752422] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:07.259 [2024-07-24 15:51:28.752433] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:07.259 [2024-07-24 15:51:28.752446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.259 [2024-07-24 15:51:28.752458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:07.259 [2024-07-24 15:51:28.752469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:26:07.259 [2024-07-24 15:51:28.752480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.259 [2024-07-24 15:51:28.770604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.259 [2024-07-24 15:51:28.770654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:07.259 [2024-07-24 15:51:28.770673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.046 ms 00:26:07.259 [2024-07-24 15:51:28.770695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.259 [2024-07-24 15:51:28.770803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.259 [2024-07-24 15:51:28.770825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:07.259 [2024-07-24 15:51:28.770838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:07.259 [2024-07-24 15:51:28.770850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.259 [2024-07-24 15:51:28.819014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.259 [2024-07-24 15:51:28.819118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:07.259 [2024-07-24 15:51:28.819142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.088 ms 00:26:07.259 [2024-07-24 15:51:28.819160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
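Each management step in the trace above is reported as an Action/Rollback record followed by its name, duration, and status. To rank the steps of a capture like this by cost, a small awk filter is enough — a sketch only, which assumes the records have been unwrapped to one per line (the capture here flattens many records per line) and that ftl.log is a hypothetical file holding them:

  # Sketch: rank FTL trace_step records by duration (ftl.log is a placeholder path;
  # assumes one 'name:' / 'duration:' record per line, as emitted by mngt/ftl_mngt.c).
  awk '/trace_step/ && /name:/     { sub(/.*name: /, "");     step = $0 }
       /trace_step/ && /duration:/ { sub(/.*duration: /, ""); print $0 "\t" step }' ftl.log |
    sort -rn | head

Run over the startup sequence so far, this makes the expensive steps (the 48.088 ms NV cache initialization just above, for instance) stand out immediately.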
00:26:07.259 [2024-07-24 15:51:28.819235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.259 [2024-07-24 15:51:28.819253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:07.259 [2024-07-24 15:51:28.819266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:07.259 [2024-07-24 15:51:28.819278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.259 [2024-07-24 15:51:28.819665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.259 [2024-07-24 15:51:28.819692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:07.259 [2024-07-24 15:51:28.819706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:26:07.259 [2024-07-24 15:51:28.819718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.259 [2024-07-24 15:51:28.819874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.259 [2024-07-24 15:51:28.819893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:07.259 [2024-07-24 15:51:28.819906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:26:07.259 [2024-07-24 15:51:28.819917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.259 [2024-07-24 15:51:28.836949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.259 [2024-07-24 15:51:28.836996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:07.259 [2024-07-24 15:51:28.837031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.004 ms 00:26:07.259 [2024-07-24 15:51:28.837054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.517 [2024-07-24 15:51:28.853385] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:07.517 [2024-07-24 15:51:28.853429] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:07.517 [2024-07-24 15:51:28.853448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.517 [2024-07-24 15:51:28.853460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:07.517 [2024-07-24 15:51:28.853474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.221 ms 00:26:07.517 [2024-07-24 15:51:28.853485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.517 [2024-07-24 15:51:28.883263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.517 [2024-07-24 15:51:28.883321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:07.517 [2024-07-24 15:51:28.883357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.729 ms 00:26:07.517 [2024-07-24 15:51:28.883370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.517 [2024-07-24 15:51:28.899131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.517 [2024-07-24 15:51:28.899171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:07.517 [2024-07-24 15:51:28.899188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.702 ms 00:26:07.517 [2024-07-24 15:51:28.899200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.517 [2024-07-24 15:51:28.914606] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.517 [2024-07-24 15:51:28.914647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:07.517 [2024-07-24 15:51:28.914664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.362 ms 00:26:07.518 [2024-07-24 15:51:28.914675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.518 [2024-07-24 15:51:28.915185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.518 [2024-07-24 15:51:28.915215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:07.518 [2024-07-24 15:51:28.915229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:26:07.518 [2024-07-24 15:51:28.915241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.518 [2024-07-24 15:51:28.991385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.518 [2024-07-24 15:51:28.991454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:07.518 [2024-07-24 15:51:28.991476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.119 ms 00:26:07.518 [2024-07-24 15:51:28.991489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.518 [2024-07-24 15:51:29.004156] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:07.518 [2024-07-24 15:51:29.006525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.518 [2024-07-24 15:51:29.006559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:07.518 [2024-07-24 15:51:29.006593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.967 ms 00:26:07.518 [2024-07-24 15:51:29.006605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.518 [2024-07-24 15:51:29.006724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.518 [2024-07-24 15:51:29.006748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:07.518 [2024-07-24 15:51:29.006762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:07.518 [2024-07-24 15:51:29.006773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.518 [2024-07-24 15:51:29.007434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.518 [2024-07-24 15:51:29.007466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:07.518 [2024-07-24 15:51:29.007482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:26:07.518 [2024-07-24 15:51:29.007494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.518 [2024-07-24 15:51:29.009452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.518 [2024-07-24 15:51:29.009491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:26:07.518 [2024-07-24 15:51:29.009510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.925 ms 00:26:07.518 [2024-07-24 15:51:29.009522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.518 [2024-07-24 15:51:29.009559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.518 [2024-07-24 15:51:29.009574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:07.518 [2024-07-24 15:51:29.009587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.007 ms 00:26:07.518 [2024-07-24 15:51:29.009605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.518 [2024-07-24 15:51:29.009648] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:07.518 [2024-07-24 15:51:29.009665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.518 [2024-07-24 15:51:29.009677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:07.518 [2024-07-24 15:51:29.009689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:07.518 [2024-07-24 15:51:29.009704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.518 [2024-07-24 15:51:29.040875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.518 [2024-07-24 15:51:29.040917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:07.518 [2024-07-24 15:51:29.040951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.146 ms 00:26:07.518 [2024-07-24 15:51:29.040963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.518 [2024-07-24 15:51:29.041046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.518 [2024-07-24 15:51:29.041073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:07.518 [2024-07-24 15:51:29.041111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:07.518 [2024-07-24 15:51:29.041127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.518 [2024-07-24 15:51:29.042266] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 318.728 ms, result 0 00:26:44.844  Copying: 28/1024 [MB] (28 MBps) Copying: 56/1024 [MB] (27 MBps) Copying: 83/1024 [MB] (27 MBps) Copying: 110/1024 [MB] (26 MBps) Copying: 138/1024 [MB] (28 MBps) Copying: 164/1024 [MB] (25 MBps) Copying: 192/1024 [MB] (28 MBps) Copying: 219/1024 [MB] (26 MBps) Copying: 247/1024 [MB] (28 MBps) Copying: 278/1024 [MB] (30 MBps) Copying: 306/1024 [MB] (27 MBps) Copying: 336/1024 [MB] (30 MBps) Copying: 365/1024 [MB] (28 MBps) Copying: 392/1024 [MB] (27 MBps) Copying: 423/1024 [MB] (30 MBps) Copying: 453/1024 [MB] (29 MBps) Copying: 481/1024 [MB] (27 MBps) Copying: 511/1024 [MB] (30 MBps) Copying: 539/1024 [MB] (28 MBps) Copying: 568/1024 [MB] (28 MBps) Copying: 595/1024 [MB] (27 MBps) Copying: 625/1024 [MB] (29 MBps) Copying: 653/1024 [MB] (28 MBps) Copying: 680/1024 [MB] (27 MBps) Copying: 707/1024 [MB] (26 MBps) Copying: 732/1024 [MB] (25 MBps) Copying: 760/1024 [MB] (27 MBps) Copying: 786/1024 [MB] (26 MBps) Copying: 814/1024 [MB] (27 MBps) Copying: 841/1024 [MB] (26 MBps) Copying: 865/1024 [MB] (23 MBps) Copying: 893/1024 [MB] (28 MBps) Copying: 919/1024 [MB] (26 MBps) Copying: 946/1024 [MB] (27 MBps) Copying: 972/1024 [MB] (25 MBps) Copying: 999/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-24 15:52:06.108070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.844 [2024-07-24 15:52:06.108169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:44.844 [2024-07-24 15:52:06.108195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:44.844 [2024-07-24 15:52:06.108209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.844 [2024-07-24 15:52:06.108244] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:44.844 [2024-07-24 15:52:06.112991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.844 [2024-07-24 15:52:06.113099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:44.844 [2024-07-24 15:52:06.113136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.711 ms 00:26:44.844 [2024-07-24 15:52:06.113171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.844 [2024-07-24 15:52:06.113618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.844 [2024-07-24 15:52:06.113676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:44.844 [2024-07-24 15:52:06.113704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:26:44.844 [2024-07-24 15:52:06.113724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.845 [2024-07-24 15:52:06.118115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.845 [2024-07-24 15:52:06.118183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:44.845 [2024-07-24 15:52:06.118216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.352 ms 00:26:44.845 [2024-07-24 15:52:06.118240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.845 [2024-07-24 15:52:06.127254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.845 [2024-07-24 15:52:06.127322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:26:44.845 [2024-07-24 15:52:06.127355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.955 ms 00:26:44.845 [2024-07-24 15:52:06.127379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.845 [2024-07-24 15:52:06.160806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.845 [2024-07-24 15:52:06.160867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:44.845 [2024-07-24 15:52:06.160887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.279 ms 00:26:44.845 [2024-07-24 15:52:06.160911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.845 [2024-07-24 15:52:06.180436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.845 [2024-07-24 15:52:06.180498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:44.845 [2024-07-24 15:52:06.180520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.472 ms 00:26:44.845 [2024-07-24 15:52:06.180533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.845 [2024-07-24 15:52:06.183163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.845 [2024-07-24 15:52:06.183215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:44.845 [2024-07-24 15:52:06.183234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.567 ms 00:26:44.845 [2024-07-24 15:52:06.183247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.845 [2024-07-24 15:52:06.214741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.845 [2024-07-24 15:52:06.214808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:44.845 [2024-07-24 15:52:06.214829] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 31.468 ms 00:26:44.845 [2024-07-24 15:52:06.214841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.845 [2024-07-24 15:52:06.249147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.845 [2024-07-24 15:52:06.249222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:44.845 [2024-07-24 15:52:06.249244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.245 ms 00:26:44.845 [2024-07-24 15:52:06.249257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.845 [2024-07-24 15:52:06.281586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.845 [2024-07-24 15:52:06.281657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:44.845 [2024-07-24 15:52:06.281680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.250 ms 00:26:44.845 [2024-07-24 15:52:06.281692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.845 [2024-07-24 15:52:06.312874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.845 [2024-07-24 15:52:06.312939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:44.845 [2024-07-24 15:52:06.312961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.034 ms 00:26:44.845 [2024-07-24 15:52:06.312974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.845 [2024-07-24 15:52:06.313029] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:44.845 [2024-07-24 15:52:06.313054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:44.845 [2024-07-24 15:52:06.313069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:26:44.845 [2024-07-24 15:52:06.313082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313252] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 
15:52:06.313554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:44.845 [2024-07-24 15:52:06.313791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.313803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.313815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.313827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.313838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 
00:26:44.846 [2024-07-24 15:52:06.313850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.313862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.313874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.313886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.313898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.313910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.313922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.313934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.313945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.313957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.313970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.313981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.313993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 
wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:44.846 [2024-07-24 15:52:06.314322] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:44.846 [2024-07-24 15:52:06.314334] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: da504125-947e-4746-8024-4a00d4456ba1 00:26:44.846 [2024-07-24 15:52:06.314356] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:26:44.846 [2024-07-24 15:52:06.314367] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:44.846 [2024-07-24 15:52:06.314378] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:44.846 [2024-07-24 15:52:06.314390] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:44.846 [2024-07-24 15:52:06.314401] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:44.846 [2024-07-24 15:52:06.314414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:44.846 [2024-07-24 15:52:06.314425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:44.846 [2024-07-24 15:52:06.314435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:44.846 [2024-07-24 15:52:06.314445] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:44.846 [2024-07-24 15:52:06.314456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.846 [2024-07-24 15:52:06.314468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:44.846 [2024-07-24 15:52:06.314480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.429 ms 00:26:44.846 [2024-07-24 15:52:06.314505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.846 [2024-07-24 15:52:06.331132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:26:44.846 [2024-07-24 15:52:06.331185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:44.846 [2024-07-24 15:52:06.331204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.554 ms 00:26:44.846 [2024-07-24 15:52:06.331216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.846 [2024-07-24 15:52:06.331512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.846 [2024-07-24 15:52:06.331555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:44.846 [2024-07-24 15:52:06.331591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:26:44.846 [2024-07-24 15:52:06.331612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.846 [2024-07-24 15:52:06.378093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.846 [2024-07-24 15:52:06.378166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:44.846 [2024-07-24 15:52:06.378187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.846 [2024-07-24 15:52:06.378200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.846 [2024-07-24 15:52:06.378285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.846 [2024-07-24 15:52:06.378301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:44.846 [2024-07-24 15:52:06.378321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.846 [2024-07-24 15:52:06.378333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.846 [2024-07-24 15:52:06.378447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.846 [2024-07-24 15:52:06.378468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:44.846 [2024-07-24 15:52:06.378481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.846 [2024-07-24 15:52:06.378492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.846 [2024-07-24 15:52:06.378516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.846 [2024-07-24 15:52:06.378531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:44.846 [2024-07-24 15:52:06.378543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.846 [2024-07-24 15:52:06.378561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.104 [2024-07-24 15:52:06.478947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.104 [2024-07-24 15:52:06.479002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:45.104 [2024-07-24 15:52:06.479022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.104 [2024-07-24 15:52:06.479034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.104 [2024-07-24 15:52:06.522650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.104 [2024-07-24 15:52:06.522755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:45.104 [2024-07-24 15:52:06.522785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.104 [2024-07-24 15:52:06.522817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.104 [2024-07-24 
15:52:06.522961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.104 [2024-07-24 15:52:06.522986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:45.104 [2024-07-24 15:52:06.523005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.104 [2024-07-24 15:52:06.523022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.104 [2024-07-24 15:52:06.523136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.104 [2024-07-24 15:52:06.523163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:45.104 [2024-07-24 15:52:06.523182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.104 [2024-07-24 15:52:06.523199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.104 [2024-07-24 15:52:06.523368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.104 [2024-07-24 15:52:06.523394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:45.104 [2024-07-24 15:52:06.523412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.104 [2024-07-24 15:52:06.523429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.104 [2024-07-24 15:52:06.523503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.104 [2024-07-24 15:52:06.523528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:45.104 [2024-07-24 15:52:06.523545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.104 [2024-07-24 15:52:06.523562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.104 [2024-07-24 15:52:06.523629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.104 [2024-07-24 15:52:06.523652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:45.104 [2024-07-24 15:52:06.523670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.104 [2024-07-24 15:52:06.523686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.104 [2024-07-24 15:52:06.523761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.104 [2024-07-24 15:52:06.523785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:45.105 [2024-07-24 15:52:06.523803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.105 [2024-07-24 15:52:06.523820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.105 [2024-07-24 15:52:06.524011] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 415.909 ms, result 0 00:26:46.477 00:26:46.477 00:26:46.477 15:52:07 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:49.007 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:49.007 15:52:09 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:49.007 15:52:09 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:49.007 15:52:09 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:49.007 15:52:09 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:49.007 15:52:10 -- ftl/dirty_shutdown.sh@33 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:49.007 15:52:10 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:49.007 15:52:10 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:49.007 Process with pid 75782 is not found 00:26:49.007 15:52:10 -- ftl/dirty_shutdown.sh@37 -- # killprocess 75782 00:26:49.007 15:52:10 -- common/autotest_common.sh@926 -- # '[' -z 75782 ']' 00:26:49.007 15:52:10 -- common/autotest_common.sh@930 -- # kill -0 75782 00:26:49.007 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (75782) - No such process 00:26:49.007 15:52:10 -- common/autotest_common.sh@953 -- # echo 'Process with pid 75782 is not found' 00:26:49.007 15:52:10 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:49.007 Remove shared memory files 00:26:49.007 15:52:10 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:49.007 15:52:10 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:49.007 15:52:10 -- ftl/common.sh@205 -- # rm -f rm -f 00:26:49.007 15:52:10 -- ftl/common.sh@206 -- # rm -f rm -f 00:26:49.007 15:52:10 -- ftl/common.sh@207 -- # rm -f rm -f 00:26:49.007 15:52:10 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:49.007 15:52:10 -- ftl/common.sh@209 -- # rm -f rm -f 00:26:49.007 ************************************ 00:26:49.007 END TEST ftl_dirty_shutdown 00:26:49.007 ************************************ 00:26:49.007 00:26:49.007 real 3m41.243s 00:26:49.007 user 4m13.719s 00:26:49.007 sys 0m37.488s 00:26:49.007 15:52:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:49.007 15:52:10 -- common/autotest_common.sh@10 -- # set +x 00:26:49.266 15:52:10 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:49.266 15:52:10 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:26:49.266 15:52:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:49.266 15:52:10 -- common/autotest_common.sh@10 -- # set +x 00:26:49.266 ************************************ 00:26:49.266 START TEST ftl_upgrade_shutdown 00:26:49.266 ************************************ 00:26:49.266 15:52:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:49.266 * Looking for test storage... 00:26:49.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:49.266 15:52:10 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:49.266 15:52:10 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:49.266 15:52:10 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:49.266 15:52:10 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:49.266 15:52:10 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:26:49.266 15:52:10 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:49.266 15:52:10 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:49.266 15:52:10 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:49.266 15:52:10 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:49.266 15:52:10 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:49.266 15:52:10 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:49.266 15:52:10 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:49.266 15:52:10 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:49.266 15:52:10 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:49.266 15:52:10 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:49.266 15:52:10 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:49.266 15:52:10 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:49.266 15:52:10 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:49.266 15:52:10 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:49.266 15:52:10 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:49.266 15:52:10 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:49.266 15:52:10 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:49.266 15:52:10 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:49.266 15:52:10 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:49.266 15:52:10 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:49.266 15:52:10 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:49.266 15:52:10 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:49.266 15:52:10 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:49.266 15:52:10 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:49.266 15:52:10 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:49.266 15:52:10 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:49.266 15:52:10 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:49.266 15:52:10 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:26:49.266 15:52:10 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:26:49.266 15:52:10 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:49.266 15:52:10 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:49.266 15:52:10 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:26:49.266 15:52:10 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:26:49.266 15:52:10 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:49.266 15:52:10 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:49.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
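A minimal bash sketch of the path derivation traced above, assuming the layout of this run: ftl/common.sh resolves its own location, then the repo root, then the rpc.py path used by every subsequent step.

  testdir=$(readlink -f "$(dirname "$0")")    # /home/vagrant/spdk_repo/spdk/test/ftl
  rootdir=$(readlink -f "$testdir/../..")     # /home/vagrant/spdk_repo/spdk
  rpc_py="$rootdir/scripts/rpc.py"            # RPC client invoked throughout the test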
00:26:49.266 15:52:10 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:49.266 15:52:10 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:49.266 15:52:10 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:49.266 15:52:10 -- ftl/common.sh@81 -- # local base_bdev= 00:26:49.266 15:52:10 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:49.266 15:52:10 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:49.266 15:52:10 -- ftl/common.sh@89 -- # spdk_tgt_pid=78123 00:26:49.266 15:52:10 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:49.266 15:52:10 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:49.266 15:52:10 -- ftl/common.sh@91 -- # waitforlisten 78123 00:26:49.266 15:52:10 -- common/autotest_common.sh@819 -- # '[' -z 78123 ']' 00:26:49.266 15:52:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.266 15:52:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:49.266 15:52:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.266 15:52:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:49.266 15:52:10 -- common/autotest_common.sh@10 -- # set +x 00:26:49.266 [2024-07-24 15:52:10.850623] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:49.266 [2024-07-24 15:52:10.850819] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78123 ] 00:26:49.524 [2024-07-24 15:52:11.019787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.782 [2024-07-24 15:52:11.203113] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:49.782 [2024-07-24 15:52:11.203356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.155 15:52:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:51.155 15:52:12 -- common/autotest_common.sh@852 -- # return 0 00:26:51.155 15:52:12 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:51.155 15:52:12 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:51.155 15:52:12 -- ftl/common.sh@99 -- # local params 00:26:51.155 15:52:12 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:51.155 15:52:12 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:51.155 15:52:12 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:51.155 15:52:12 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:26:51.155 15:52:12 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:51.155 15:52:12 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:51.155 15:52:12 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:51.155 15:52:12 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:26:51.155 15:52:12 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:51.155 15:52:12 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:51.155 15:52:12 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:51.155 15:52:12 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:51.155 15:52:12 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:26:51.155 15:52:12 -- ftl/common.sh@54 -- # local name=base 00:26:51.155 
15:52:12 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:26:51.155 15:52:12 -- ftl/common.sh@56 -- # local size=20480 00:26:51.155 15:52:12 -- ftl/common.sh@59 -- # local base_bdev 00:26:51.155 15:52:12 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0 00:26:51.414 15:52:12 -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:51.414 15:52:12 -- ftl/common.sh@62 -- # local base_size 00:26:51.414 15:52:12 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:51.414 15:52:12 -- common/autotest_common.sh@1357 -- # local bdev_name=basen1 00:26:51.414 15:52:12 -- common/autotest_common.sh@1358 -- # local bdev_info 00:26:51.414 15:52:12 -- common/autotest_common.sh@1359 -- # local bs 00:26:51.414 15:52:12 -- common/autotest_common.sh@1360 -- # local nb 00:26:51.414 15:52:12 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:51.672 15:52:13 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:26:51.672 { 00:26:51.672 "name": "basen1", 00:26:51.672 "aliases": [ 00:26:51.672 "8e5f2c5e-bab1-466a-9812-a752ad2bc22d" 00:26:51.672 ], 00:26:51.672 "product_name": "NVMe disk", 00:26:51.672 "block_size": 4096, 00:26:51.672 "num_blocks": 1310720, 00:26:51.672 "uuid": "8e5f2c5e-bab1-466a-9812-a752ad2bc22d", 00:26:51.672 "assigned_rate_limits": { 00:26:51.672 "rw_ios_per_sec": 0, 00:26:51.672 "rw_mbytes_per_sec": 0, 00:26:51.672 "r_mbytes_per_sec": 0, 00:26:51.672 "w_mbytes_per_sec": 0 00:26:51.672 }, 00:26:51.672 "claimed": true, 00:26:51.672 "claim_type": "read_many_write_one", 00:26:51.672 "zoned": false, 00:26:51.672 "supported_io_types": { 00:26:51.672 "read": true, 00:26:51.672 "write": true, 00:26:51.672 "unmap": true, 00:26:51.672 "write_zeroes": true, 00:26:51.672 "flush": true, 00:26:51.672 "reset": true, 00:26:51.672 "compare": true, 00:26:51.672 "compare_and_write": false, 00:26:51.672 "abort": true, 00:26:51.672 "nvme_admin": true, 00:26:51.672 "nvme_io": true 00:26:51.672 }, 00:26:51.672 "driver_specific": { 00:26:51.672 "nvme": [ 00:26:51.672 { 00:26:51.672 "pci_address": "0000:00:07.0", 00:26:51.672 "trid": { 00:26:51.672 "trtype": "PCIe", 00:26:51.672 "traddr": "0000:00:07.0" 00:26:51.672 }, 00:26:51.672 "ctrlr_data": { 00:26:51.672 "cntlid": 0, 00:26:51.672 "vendor_id": "0x1b36", 00:26:51.672 "model_number": "QEMU NVMe Ctrl", 00:26:51.672 "serial_number": "12341", 00:26:51.672 "firmware_revision": "8.0.0", 00:26:51.672 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:51.672 "oacs": { 00:26:51.672 "security": 0, 00:26:51.672 "format": 1, 00:26:51.672 "firmware": 0, 00:26:51.672 "ns_manage": 1 00:26:51.672 }, 00:26:51.672 "multi_ctrlr": false, 00:26:51.672 "ana_reporting": false 00:26:51.672 }, 00:26:51.672 "vs": { 00:26:51.672 "nvme_version": "1.4" 00:26:51.672 }, 00:26:51.672 "ns_data": { 00:26:51.672 "id": 1, 00:26:51.672 "can_share": false 00:26:51.672 } 00:26:51.672 } 00:26:51.672 ], 00:26:51.672 "mp_policy": "active_passive" 00:26:51.672 } 00:26:51.672 } 00:26:51.672 ]' 00:26:51.672 15:52:13 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:26:51.672 15:52:13 -- common/autotest_common.sh@1362 -- # bs=4096 00:26:51.672 15:52:13 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:26:51.672 15:52:13 -- common/autotest_common.sh@1363 -- # nb=1310720 00:26:51.672 15:52:13 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:26:51.672 15:52:13 -- common/autotest_common.sh@1367 -- # echo 5120 00:26:51.672 15:52:13 -- 
ftl/common.sh@63 -- # base_size=5120 00:26:51.672 15:52:13 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:51.672 15:52:13 -- ftl/common.sh@67 -- # clear_lvols 00:26:51.672 15:52:13 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:51.672 15:52:13 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:51.931 15:52:13 -- ftl/common.sh@28 -- # stores=b3157fe7-3b40-4664-8423-1ce1aa7c9996 00:26:51.931 15:52:13 -- ftl/common.sh@29 -- # for lvs in $stores 00:26:51.931 15:52:13 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b3157fe7-3b40-4664-8423-1ce1aa7c9996 00:26:52.189 15:52:13 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:52.448 15:52:14 -- ftl/common.sh@68 -- # lvs=0a5aaa92-7755-47d5-96e5-4e7831cc579a 00:26:52.448 15:52:14 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 0a5aaa92-7755-47d5-96e5-4e7831cc579a 00:26:52.706 15:52:14 -- ftl/common.sh@107 -- # base_bdev=a16de9ad-ee56-4411-9492-b366011688b3 00:26:52.706 15:52:14 -- ftl/common.sh@108 -- # [[ -z a16de9ad-ee56-4411-9492-b366011688b3 ]] 00:26:52.706 15:52:14 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 a16de9ad-ee56-4411-9492-b366011688b3 5120 00:26:52.706 15:52:14 -- ftl/common.sh@35 -- # local name=cache 00:26:52.706 15:52:14 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:26:52.706 15:52:14 -- ftl/common.sh@37 -- # local base_bdev=a16de9ad-ee56-4411-9492-b366011688b3 00:26:52.706 15:52:14 -- ftl/common.sh@38 -- # local cache_size=5120 00:26:52.706 15:52:14 -- ftl/common.sh@41 -- # get_bdev_size a16de9ad-ee56-4411-9492-b366011688b3 00:26:52.706 15:52:14 -- common/autotest_common.sh@1357 -- # local bdev_name=a16de9ad-ee56-4411-9492-b366011688b3 00:26:52.706 15:52:14 -- common/autotest_common.sh@1358 -- # local bdev_info 00:26:52.706 15:52:14 -- common/autotest_common.sh@1359 -- # local bs 00:26:52.706 15:52:14 -- common/autotest_common.sh@1360 -- # local nb 00:26:52.706 15:52:14 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a16de9ad-ee56-4411-9492-b366011688b3 00:26:52.964 15:52:14 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:26:52.964 { 00:26:52.964 "name": "a16de9ad-ee56-4411-9492-b366011688b3", 00:26:52.964 "aliases": [ 00:26:52.964 "lvs/basen1p0" 00:26:52.964 ], 00:26:52.964 "product_name": "Logical Volume", 00:26:52.964 "block_size": 4096, 00:26:52.964 "num_blocks": 5242880, 00:26:52.964 "uuid": "a16de9ad-ee56-4411-9492-b366011688b3", 00:26:52.964 "assigned_rate_limits": { 00:26:52.964 "rw_ios_per_sec": 0, 00:26:52.965 "rw_mbytes_per_sec": 0, 00:26:52.965 "r_mbytes_per_sec": 0, 00:26:52.965 "w_mbytes_per_sec": 0 00:26:52.965 }, 00:26:52.965 "claimed": false, 00:26:52.965 "zoned": false, 00:26:52.965 "supported_io_types": { 00:26:52.965 "read": true, 00:26:52.965 "write": true, 00:26:52.965 "unmap": true, 00:26:52.965 "write_zeroes": true, 00:26:52.965 "flush": false, 00:26:52.965 "reset": true, 00:26:52.965 "compare": false, 00:26:52.965 "compare_and_write": false, 00:26:52.965 "abort": false, 00:26:52.965 "nvme_admin": false, 00:26:52.965 "nvme_io": false 00:26:52.965 }, 00:26:52.965 "driver_specific": { 00:26:52.965 "lvol": { 00:26:52.965 "lvol_store_uuid": "0a5aaa92-7755-47d5-96e5-4e7831cc579a", 00:26:52.965 "base_bdev": "basen1", 00:26:52.965 "thin_provision": true, 00:26:52.965 "snapshot": false, 00:26:52.965 
"clone": false, 00:26:52.965 "esnap_clone": false 00:26:52.965 } 00:26:52.965 } 00:26:52.965 } 00:26:52.965 ]' 00:26:52.965 15:52:14 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:26:53.222 15:52:14 -- common/autotest_common.sh@1362 -- # bs=4096 00:26:53.222 15:52:14 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:26:53.222 15:52:14 -- common/autotest_common.sh@1363 -- # nb=5242880 00:26:53.222 15:52:14 -- common/autotest_common.sh@1366 -- # bdev_size=20480 00:26:53.222 15:52:14 -- common/autotest_common.sh@1367 -- # echo 20480 00:26:53.223 15:52:14 -- ftl/common.sh@41 -- # local base_size=1024 00:26:53.223 15:52:14 -- ftl/common.sh@44 -- # local nvc_bdev 00:26:53.223 15:52:14 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:26:53.481 15:52:14 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:53.481 15:52:14 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:53.481 15:52:14 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:53.739 15:52:15 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:53.739 15:52:15 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:53.739 15:52:15 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d a16de9ad-ee56-4411-9492-b366011688b3 -c cachen1p0 --l2p_dram_limit 2 00:26:53.998 [2024-07-24 15:52:15.395054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.998 [2024-07-24 15:52:15.395129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:53.998 [2024-07-24 15:52:15.395155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:53.998 [2024-07-24 15:52:15.395170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.998 [2024-07-24 15:52:15.395259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.998 [2024-07-24 15:52:15.395277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:53.998 [2024-07-24 15:52:15.395294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:26:53.998 [2024-07-24 15:52:15.395307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.998 [2024-07-24 15:52:15.395340] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:53.998 [2024-07-24 15:52:15.396485] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:53.998 [2024-07-24 15:52:15.396752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.998 [2024-07-24 15:52:15.396876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:53.998 [2024-07-24 15:52:15.396998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.412 ms 00:26:53.998 [2024-07-24 15:52:15.397049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.998 [2024-07-24 15:52:15.397322] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 6745893d-90a2-4d28-96e3-df48f8e2d21c 00:26:53.998 [2024-07-24 15:52:15.398525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.998 [2024-07-24 15:52:15.398698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:53.998 [2024-07-24 15:52:15.398824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl] duration: 0.021 ms 00:26:53.998 [2024-07-24 15:52:15.398889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.998 [2024-07-24 15:52:15.403628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.998 [2024-07-24 15:52:15.403688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:53.998 [2024-07-24 15:52:15.403707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.551 ms 00:26:53.998 [2024-07-24 15:52:15.403722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.998 [2024-07-24 15:52:15.403787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.998 [2024-07-24 15:52:15.403808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:53.998 [2024-07-24 15:52:15.403822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:26:53.998 [2024-07-24 15:52:15.403839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.998 [2024-07-24 15:52:15.403916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.998 [2024-07-24 15:52:15.403937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:53.998 [2024-07-24 15:52:15.403951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:53.998 [2024-07-24 15:52:15.403968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.998 [2024-07-24 15:52:15.404008] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:53.998 [2024-07-24 15:52:15.408557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.998 [2024-07-24 15:52:15.408601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:53.998 [2024-07-24 15:52:15.408622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.561 ms 00:26:53.998 [2024-07-24 15:52:15.408635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.998 [2024-07-24 15:52:15.408677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.998 [2024-07-24 15:52:15.408691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:53.998 [2024-07-24 15:52:15.408707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:53.998 [2024-07-24 15:52:15.408719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.998 [2024-07-24 15:52:15.408777] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:53.998 [2024-07-24 15:52:15.408911] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:53.998 [2024-07-24 15:52:15.408935] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:53.998 [2024-07-24 15:52:15.408951] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:53.998 [2024-07-24 15:52:15.408969] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:53.998 [2024-07-24 15:52:15.408983] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:53.998 [2024-07-24 15:52:15.408999] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:53.998 [2024-07-24 15:52:15.409011] ftl_layout.c: 
681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:53.998 [2024-07-24 15:52:15.409025] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:53.998 [2024-07-24 15:52:15.409041] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:53.998 [2024-07-24 15:52:15.409055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.998 [2024-07-24 15:52:15.409067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:53.998 [2024-07-24 15:52:15.409082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.281 ms 00:26:53.998 [2024-07-24 15:52:15.409117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.998 [2024-07-24 15:52:15.409195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.998 [2024-07-24 15:52:15.409210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:53.998 [2024-07-24 15:52:15.409238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:26:53.998 [2024-07-24 15:52:15.409250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.998 [2024-07-24 15:52:15.409341] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:53.998 [2024-07-24 15:52:15.409357] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:53.998 [2024-07-24 15:52:15.409377] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:53.998 [2024-07-24 15:52:15.409398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:53.998 [2024-07-24 15:52:15.409413] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:53.998 [2024-07-24 15:52:15.409424] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:53.998 [2024-07-24 15:52:15.409438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:53.998 [2024-07-24 15:52:15.409450] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:53.998 [2024-07-24 15:52:15.409463] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:53.998 [2024-07-24 15:52:15.409475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:53.998 [2024-07-24 15:52:15.409489] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:53.998 [2024-07-24 15:52:15.409501] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:53.998 [2024-07-24 15:52:15.409516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:53.998 [2024-07-24 15:52:15.409528] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:53.998 [2024-07-24 15:52:15.409542] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:53.998 [2024-07-24 15:52:15.409554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:53.998 [2024-07-24 15:52:15.409569] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:53.998 [2024-07-24 15:52:15.409581] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:53.998 [2024-07-24 15:52:15.409596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:53.998 [2024-07-24 15:52:15.409608] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:53.998 [2024-07-24 15:52:15.409622] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:53.998 [2024-07-24 
15:52:15.409634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:53.998 [2024-07-24 15:52:15.409648] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:53.998 [2024-07-24 15:52:15.409659] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:53.998 [2024-07-24 15:52:15.409672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:53.998 [2024-07-24 15:52:15.409684] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:53.998 [2024-07-24 15:52:15.409697] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:53.998 [2024-07-24 15:52:15.409709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:53.998 [2024-07-24 15:52:15.409722] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:53.998 [2024-07-24 15:52:15.409734] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:53.998 [2024-07-24 15:52:15.409748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:53.998 [2024-07-24 15:52:15.409759] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:53.998 [2024-07-24 15:52:15.409775] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:53.998 [2024-07-24 15:52:15.409786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:53.998 [2024-07-24 15:52:15.409800] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:53.998 [2024-07-24 15:52:15.409811] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:53.998 [2024-07-24 15:52:15.409824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:53.998 [2024-07-24 15:52:15.409836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:53.998 [2024-07-24 15:52:15.409851] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:53.998 [2024-07-24 15:52:15.409863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:53.998 [2024-07-24 15:52:15.409876] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:53.998 [2024-07-24 15:52:15.409889] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:53.998 [2024-07-24 15:52:15.409903] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:53.998 [2024-07-24 15:52:15.409915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:53.998 [2024-07-24 15:52:15.409930] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:53.998 [2024-07-24 15:52:15.409942] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:53.998 [2024-07-24 15:52:15.409955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:53.998 [2024-07-24 15:52:15.409967] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:53.998 [2024-07-24 15:52:15.409982] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:53.998 [2024-07-24 15:52:15.409994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:53.998 [2024-07-24 15:52:15.410011] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:53.998 [2024-07-24 15:52:15.410025] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:53.998 [2024-07-24 15:52:15.410045] 
upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:53.998 [2024-07-24 15:52:15.410057] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:53.998 [2024-07-24 15:52:15.410071] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:53.998 [2024-07-24 15:52:15.410096] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:53.998 [2024-07-24 15:52:15.410114] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:53.998 [2024-07-24 15:52:15.410126] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:53.998 [2024-07-24 15:52:15.410140] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:53.998 [2024-07-24 15:52:15.410152] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:53.998 [2024-07-24 15:52:15.410166] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:53.998 [2024-07-24 15:52:15.410179] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:53.998 [2024-07-24 15:52:15.410193] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:53.998 [2024-07-24 15:52:15.410205] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:53.998 [2024-07-24 15:52:15.410224] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:53.998 [2024-07-24 15:52:15.410236] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:53.998 [2024-07-24 15:52:15.410252] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:53.999 [2024-07-24 15:52:15.410265] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:53.999 [2024-07-24 15:52:15.410279] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:53.999 [2024-07-24 15:52:15.410291] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:53.999 [2024-07-24 15:52:15.410305] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:53.999 [2024-07-24 15:52:15.410318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.999 [2024-07-24 15:52:15.410332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:53.999 [2024-07-24 15:52:15.410345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl] duration: 1.027 ms 00:26:53.999 [2024-07-24 15:52:15.410359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.999 [2024-07-24 15:52:15.428139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.999 [2024-07-24 15:52:15.428196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:53.999 [2024-07-24 15:52:15.428217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.722 ms 00:26:53.999 [2024-07-24 15:52:15.428232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.999 [2024-07-24 15:52:15.428301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.999 [2024-07-24 15:52:15.428321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:53.999 [2024-07-24 15:52:15.428335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:53.999 [2024-07-24 15:52:15.428349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.999 [2024-07-24 15:52:15.466791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.999 [2024-07-24 15:52:15.466852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:53.999 [2024-07-24 15:52:15.466873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 38.362 ms 00:26:53.999 [2024-07-24 15:52:15.466889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.999 [2024-07-24 15:52:15.466946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.999 [2024-07-24 15:52:15.466967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:53.999 [2024-07-24 15:52:15.466981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:53.999 [2024-07-24 15:52:15.466995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.999 [2024-07-24 15:52:15.467395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.999 [2024-07-24 15:52:15.467422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:53.999 [2024-07-24 15:52:15.467436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.322 ms 00:26:53.999 [2024-07-24 15:52:15.467451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.999 [2024-07-24 15:52:15.467501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.999 [2024-07-24 15:52:15.467531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:53.999 [2024-07-24 15:52:15.467544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:53.999 [2024-07-24 15:52:15.467558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.999 [2024-07-24 15:52:15.487347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.999 [2024-07-24 15:52:15.487435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:53.999 [2024-07-24 15:52:15.487463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 19.756 ms 00:26:53.999 [2024-07-24 15:52:15.487484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.999 [2024-07-24 15:52:15.502933] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:53.999 [2024-07-24 15:52:15.504122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.999 [2024-07-24 15:52:15.504171] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:53.999 [2024-07-24 15:52:15.504194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.451 ms 00:26:53.999 [2024-07-24 15:52:15.504208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.999 [2024-07-24 15:52:15.530490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.999 [2024-07-24 15:52:15.530579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:53.999 [2024-07-24 15:52:15.530612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.206 ms 00:26:53.999 [2024-07-24 15:52:15.530630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.999 [2024-07-24 15:52:15.530748] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 00:26:53.999 [2024-07-24 15:52:15.530776] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:26:56.528 [2024-07-24 15:52:17.572171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.528 [2024-07-24 15:52:17.572246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:56.528 [2024-07-24 15:52:17.572273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2041.433 ms 00:26:56.528 [2024-07-24 15:52:17.572287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.528 [2024-07-24 15:52:17.572409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.528 [2024-07-24 15:52:17.572429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:56.528 [2024-07-24 15:52:17.572445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:26:56.528 [2024-07-24 15:52:17.572458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.528 [2024-07-24 15:52:17.603197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.528 [2024-07-24 15:52:17.603248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:56.528 [2024-07-24 15:52:17.603272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.657 ms 00:26:56.528 [2024-07-24 15:52:17.603285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.528 [2024-07-24 15:52:17.634140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.528 [2024-07-24 15:52:17.634188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:56.528 [2024-07-24 15:52:17.634212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.796 ms 00:26:56.528 [2024-07-24 15:52:17.634225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.528 [2024-07-24 15:52:17.634635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.528 [2024-07-24 15:52:17.634672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:56.528 [2024-07-24 15:52:17.634692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.359 ms 00:26:56.528 [2024-07-24 15:52:17.634704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.528 [2024-07-24 15:52:17.715152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.528 [2024-07-24 15:52:17.715217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 
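Worked numbers for the scrub step above: first startup wipes the 4 GiB NV cache data region, and the 'Scrub NV cache' action is timed at 2041.433 ms, i.e. roughly 4096 MiB / 2.04 s, about 2 GiB/s; it accounts for most of the 2388.542 ms total 'FTL startup' duration reported just below.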
00:26:56.528 [2024-07-24 15:52:17.715243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 80.375 ms 00:26:56.528 [2024-07-24 15:52:17.715257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.528 [2024-07-24 15:52:17.747528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.528 [2024-07-24 15:52:17.747587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:56.528 [2024-07-24 15:52:17.747612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 32.205 ms 00:26:56.528 [2024-07-24 15:52:17.747629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.528 [2024-07-24 15:52:17.749555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.528 [2024-07-24 15:52:17.749593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:56.528 [2024-07-24 15:52:17.749617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.868 ms 00:26:56.528 [2024-07-24 15:52:17.749630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.528 [2024-07-24 15:52:17.782054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.528 [2024-07-24 15:52:17.782149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:56.528 [2024-07-24 15:52:17.782190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 32.339 ms 00:26:56.528 [2024-07-24 15:52:17.782213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.528 [2024-07-24 15:52:17.782307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.528 [2024-07-24 15:52:17.782337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:56.528 [2024-07-24 15:52:17.782364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:56.528 [2024-07-24 15:52:17.782384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.528 [2024-07-24 15:52:17.782551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.528 [2024-07-24 15:52:17.782580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:56.528 [2024-07-24 15:52:17.782610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:26:56.528 [2024-07-24 15:52:17.782629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.528 [2024-07-24 15:52:17.784388] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2388.542 ms, result 0 00:26:56.528 { 00:26:56.528 "name": "ftl", 00:26:56.528 "uuid": "6745893d-90a2-4d28-96e3-df48f8e2d21c" 00:26:56.528 } 00:26:56.528 15:52:17 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:56.528 [2024-07-24 15:52:18.054931] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.528 15:52:18 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:56.787 15:52:18 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:57.045 [2024-07-24 15:52:18.587658] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:57.045 15:52:18 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:57.303 [2024-07-24 15:52:18.869307] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:57.303 15:52:18 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:57.870 Fill FTL, iteration 1 00:26:57.870 15:52:19 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:57.870 15:52:19 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:57.870 15:52:19 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:57.870 15:52:19 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:57.870 15:52:19 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:57.870 15:52:19 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:57.870 15:52:19 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:57.870 15:52:19 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:57.870 15:52:19 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:57.870 15:52:19 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:57.870 15:52:19 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:57.870 15:52:19 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:57.870 15:52:19 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:57.870 15:52:19 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:57.870 15:52:19 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:57.870 15:52:19 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:57.870 15:52:19 -- ftl/common.sh@163 -- # spdk_ini_pid=78243 00:26:57.870 15:52:19 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:57.870 15:52:19 -- ftl/common.sh@165 -- # waitforlisten 78243 /var/tmp/spdk.tgt.sock 00:26:57.870 15:52:19 -- common/autotest_common.sh@819 -- # '[' -z 78243 ']' 00:26:57.870 15:52:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:57.870 15:52:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:57.870 15:52:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:57.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:57.870 15:52:19 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:57.870 15:52:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:57.870 15:52:19 -- common/autotest_common.sh@10 -- # set +x 00:26:57.870 [2024-07-24 15:52:19.376889] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
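A condensed sketch of the tcp_initiator_setup sequence unfolding here, with socket path and NQN as in this run: a throwaway SPDK app is started on core [1] with its own RPC socket, attaches to the loopback NVMe/TCP target so the FTL namespace appears as bdev ftln1, dumps the bdev subsystem config to ini.json, and is then killed; spdk_dd later replays that JSON standalone.

  build/bin/spdk_tgt --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
      -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # prints: ftln1
  { echo '{"subsystems": ['
    scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
    echo ']}'; } > test/ftl/config/ini.json
  kill "$spdk_ini_pid"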
00:26:57.870 [2024-07-24 15:52:19.377242] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78243 ] 00:26:58.129 [2024-07-24 15:52:19.542528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.387 [2024-07-24 15:52:19.787370] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:58.387 [2024-07-24 15:52:19.787874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.763 15:52:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:59.763 15:52:21 -- common/autotest_common.sh@852 -- # return 0 00:26:59.763 15:52:21 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:00.022 ftln1 00:27:00.022 15:52:21 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:00.022 15:52:21 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:00.280 15:52:21 -- ftl/common.sh@173 -- # echo ']}' 00:27:00.280 15:52:21 -- ftl/common.sh@176 -- # killprocess 78243 00:27:00.280 15:52:21 -- common/autotest_common.sh@926 -- # '[' -z 78243 ']' 00:27:00.280 15:52:21 -- common/autotest_common.sh@930 -- # kill -0 78243 00:27:00.280 15:52:21 -- common/autotest_common.sh@931 -- # uname 00:27:00.280 15:52:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:00.280 15:52:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78243 00:27:00.280 killing process with pid 78243 00:27:00.280 15:52:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:00.280 15:52:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:00.280 15:52:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78243' 00:27:00.280 15:52:21 -- common/autotest_common.sh@945 -- # kill 78243 00:27:00.280 15:52:21 -- common/autotest_common.sh@950 -- # wait 78243 00:27:02.813 15:52:23 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:02.813 15:52:23 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:02.813 [2024-07-24 15:52:23.861667] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
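For scale: the spdk_dd just launched writes count=1024 blocks of bs=1048576 bytes from /dev/urandom into ftln1 at qd=2, i.e. exactly 1 GiB per fill iteration, with --seek advanced by 1024 MiB for iteration 2 so the two fills land back to back; at the roughly 204 MBps average reported below, each fill takes about five seconds.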
00:27:02.813 [2024-07-24 15:52:23.861810] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78308 ] 00:27:02.813 [2024-07-24 15:52:24.023066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.813 [2024-07-24 15:52:24.213577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.302  Copying: 211/1024 [MB] (211 MBps) Copying: 411/1024 [MB] (200 MBps) Copying: 616/1024 [MB] (205 MBps) Copying: 819/1024 [MB] (203 MBps) Copying: 1024/1024 [MB] (205 MBps) Copying: 1024/1024 [MB] (average 204 MBps) 00:27:09.302 00:27:09.302 Calculate MD5 checksum, iteration 1 00:27:09.302 15:52:30 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:09.302 15:52:30 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:09.302 15:52:30 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:09.302 15:52:30 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:09.302 15:52:30 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:09.302 15:52:30 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:09.302 15:52:30 -- ftl/common.sh@154 -- # return 0 00:27:09.302 15:52:30 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:09.561 [2024-07-24 15:52:30.983383] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
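Each fill is followed by a read-back pass over the same replayed JSON config, sketched here from the upgrade_shutdown.sh steps traced in this log: the 1 GiB window just written is pulled out of ftln1 into test/ftl/file, md5summed, and the digest stored in sums[] (the values 33b3104801e4d1f63a22012ec88bd2e8 and 5a126dd73759ae916f97deeec9868d79 recorded further down).

  build/bin/spdk_dd --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=test/ftl/config/ini.json \
      --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
  sums[i]=$(md5sum test/ftl/file | cut -f1 -d' ')
  (( i++ ))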
00:27:09.561 [2024-07-24 15:52:30.983538] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78378 ] 00:27:09.561 [2024-07-24 15:52:31.157481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.818 [2024-07-24 15:52:31.394182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.493  Copying: 503/1024 [MB] (503 MBps) Copying: 930/1024 [MB] (427 MBps) Copying: 1024/1024 [MB] (average 460 MBps) 00:27:13.493 00:27:13.493 15:52:35 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:13.493 15:52:35 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:16.059 15:52:37 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:16.059 Fill FTL, iteration 2 00:27:16.059 15:52:37 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=33b3104801e4d1f63a22012ec88bd2e8 00:27:16.059 15:52:37 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:16.059 15:52:37 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:16.059 15:52:37 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:16.059 15:52:37 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:16.059 15:52:37 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:16.059 15:52:37 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:16.059 15:52:37 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:16.059 15:52:37 -- ftl/common.sh@154 -- # return 0 00:27:16.059 15:52:37 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:16.059 [2024-07-24 15:52:37.371830] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:16.059 [2024-07-24 15:52:37.372225] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78447 ] 00:27:16.059 [2024-07-24 15:52:37.532022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.317 [2024-07-24 15:52:37.750164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.904  Copying: 211/1024 [MB] (211 MBps) Copying: 418/1024 [MB] (207 MBps) Copying: 615/1024 [MB] (197 MBps) Copying: 824/1024 [MB] (209 MBps) Copying: 1024/1024 [MB] (average 206 MBps) 00:27:22.904 00:27:22.904 15:52:44 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:22.904 15:52:44 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:22.904 Calculate MD5 checksum, iteration 2 00:27:22.904 15:52:44 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:22.904 15:52:44 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:22.905 15:52:44 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:22.905 15:52:44 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:22.905 15:52:44 -- ftl/common.sh@154 -- # return 0 00:27:22.905 15:52:44 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:22.905 [2024-07-24 15:52:44.321919] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:22.905 [2024-07-24 15:52:44.322069] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78523 ] 00:27:22.905 [2024-07-24 15:52:44.483850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.162 [2024-07-24 15:52:44.670072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.385  Copying: 476/1024 [MB] (476 MBps) Copying: 958/1024 [MB] (482 MBps) Copying: 1024/1024 [MB] (average 480 MBps) 00:27:27.385 00:27:27.385 15:52:48 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:27.385 15:52:48 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:29.915 15:52:51 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:29.915 15:52:51 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=5a126dd73759ae916f97deeec9868d79 00:27:29.916 15:52:51 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:29.916 15:52:51 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:29.916 15:52:51 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:29.916 [2024-07-24 15:52:51.325874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.916 [2024-07-24 15:52:51.325944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:29.916 [2024-07-24 15:52:51.325967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:29.916 [2024-07-24 15:52:51.325980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.916 [2024-07-24 15:52:51.326020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.916 [2024-07-24 15:52:51.326036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:29.916 [2024-07-24 15:52:51.326059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:29.916 [2024-07-24 15:52:51.326072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.916 [2024-07-24 15:52:51.326132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.916 [2024-07-24 15:52:51.326148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:29.916 [2024-07-24 15:52:51.326161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:29.916 [2024-07-24 15:52:51.326173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.916 [2024-07-24 15:52:51.326252] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.375 ms, result 0 00:27:29.916 true 00:27:29.916 15:52:51 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:30.174 { 00:27:30.174 "name": "ftl", 00:27:30.174 "properties": [ 00:27:30.174 { 00:27:30.174 "name": "superblock_version", 00:27:30.174 "value": 5, 00:27:30.174 "read-only": true 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "name": "base_device", 00:27:30.174 "bands": [ 00:27:30.174 { 00:27:30.174 "id": 0, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 1, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 2, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 }, 
00:27:30.174 { 00:27:30.174 "id": 3, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 4, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 5, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 6, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 7, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 8, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 9, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 10, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 11, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 12, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 13, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 14, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 15, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 16, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 17, 00:27:30.174 "state": "FREE", 00:27:30.174 "validity": 0.0 00:27:30.174 } 00:27:30.174 ], 00:27:30.174 "read-only": true 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "name": "cache_device", 00:27:30.174 "type": "bdev", 00:27:30.174 "chunks": [ 00:27:30.174 { 00:27:30.174 "id": 0, 00:27:30.174 "state": "CLOSED", 00:27:30.174 "utilization": 1.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 1, 00:27:30.174 "state": "CLOSED", 00:27:30.174 "utilization": 1.0 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 2, 00:27:30.174 "state": "OPEN", 00:27:30.174 "utilization": 0.001953125 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "id": 3, 00:27:30.174 "state": "OPEN", 00:27:30.174 "utilization": 0.0 00:27:30.174 } 00:27:30.174 ], 00:27:30.174 "read-only": true 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "name": "verbose_mode", 00:27:30.174 "value": true, 00:27:30.174 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:30.174 }, 00:27:30.174 { 00:27:30.174 "name": "prep_upgrade_on_shutdown", 00:27:30.174 "value": false, 00:27:30.174 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:30.174 } 00:27:30.174 ] 00:27:30.174 } 00:27:30.175 15:52:51 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:30.433 [2024-07-24 15:52:51.782482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.433 [2024-07-24 15:52:51.782552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:30.433 [2024-07-24 15:52:51.782576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:30.433 [2024-07-24 15:52:51.782588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.433 [2024-07-24 15:52:51.782627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:30.433 [2024-07-24 15:52:51.782642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:30.433 [2024-07-24 15:52:51.782668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:30.433 [2024-07-24 15:52:51.782680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.433 [2024-07-24 15:52:51.782709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.433 [2024-07-24 15:52:51.782723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:30.433 [2024-07-24 15:52:51.782735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:30.433 [2024-07-24 15:52:51.782747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.433 [2024-07-24 15:52:51.782867] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.376 ms, result 0 00:27:30.433 true 00:27:30.433 15:52:51 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:30.433 15:52:51 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:30.433 15:52:51 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:30.433 15:52:52 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:30.433 15:52:52 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:30.433 15:52:52 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:30.691 [2024-07-24 15:52:52.239023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.691 [2024-07-24 15:52:52.239113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:30.691 [2024-07-24 15:52:52.239137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:30.691 [2024-07-24 15:52:52.239150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.691 [2024-07-24 15:52:52.239189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.691 [2024-07-24 15:52:52.239204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:30.691 [2024-07-24 15:52:52.239217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:30.691 [2024-07-24 15:52:52.239229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.691 [2024-07-24 15:52:52.239257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.691 [2024-07-24 15:52:52.239269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:30.691 [2024-07-24 15:52:52.239281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:30.691 [2024-07-24 15:52:52.239293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.691 [2024-07-24 15:52:52.239371] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.335 ms, result 0 00:27:30.691 true 00:27:30.691 15:52:52 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:30.950 { 00:27:30.950 "name": "ftl", 00:27:30.950 "properties": [ 00:27:30.950 { 00:27:30.950 "name": "superblock_version", 00:27:30.950 "value": 5, 00:27:30.950 "read-only": true 00:27:30.950 }, 00:27:30.950 { 00:27:30.950 "name": 
"base_device", 00:27:30.950 "bands": [ 00:27:30.950 { 00:27:30.950 "id": 0, 00:27:30.950 "state": "FREE", 00:27:30.950 "validity": 0.0 00:27:30.950 }, 00:27:30.950 { 00:27:30.950 "id": 1, 00:27:30.950 "state": "FREE", 00:27:30.950 "validity": 0.0 00:27:30.950 }, 00:27:30.950 { 00:27:30.950 "id": 2, 00:27:30.950 "state": "FREE", 00:27:30.950 "validity": 0.0 00:27:30.950 }, 00:27:30.950 { 00:27:30.950 "id": 3, 00:27:30.950 "state": "FREE", 00:27:30.950 "validity": 0.0 00:27:30.950 }, 00:27:30.950 { 00:27:30.950 "id": 4, 00:27:30.950 "state": "FREE", 00:27:30.950 "validity": 0.0 00:27:30.950 }, 00:27:30.950 { 00:27:30.950 "id": 5, 00:27:30.950 "state": "FREE", 00:27:30.950 "validity": 0.0 00:27:30.950 }, 00:27:30.950 { 00:27:30.950 "id": 6, 00:27:30.950 "state": "FREE", 00:27:30.950 "validity": 0.0 00:27:30.950 }, 00:27:30.950 { 00:27:30.950 "id": 7, 00:27:30.950 "state": "FREE", 00:27:30.950 "validity": 0.0 00:27:30.950 }, 00:27:30.950 { 00:27:30.950 "id": 8, 00:27:30.951 "state": "FREE", 00:27:30.951 "validity": 0.0 00:27:30.951 }, 00:27:30.951 { 00:27:30.951 "id": 9, 00:27:30.951 "state": "FREE", 00:27:30.951 "validity": 0.0 00:27:30.951 }, 00:27:30.951 { 00:27:30.951 "id": 10, 00:27:30.951 "state": "FREE", 00:27:30.951 "validity": 0.0 00:27:30.951 }, 00:27:30.951 { 00:27:30.951 "id": 11, 00:27:30.951 "state": "FREE", 00:27:30.951 "validity": 0.0 00:27:30.951 }, 00:27:30.951 { 00:27:30.951 "id": 12, 00:27:30.951 "state": "FREE", 00:27:30.951 "validity": 0.0 00:27:30.951 }, 00:27:30.951 { 00:27:30.951 "id": 13, 00:27:30.951 "state": "FREE", 00:27:30.951 "validity": 0.0 00:27:30.951 }, 00:27:30.951 { 00:27:30.951 "id": 14, 00:27:30.951 "state": "FREE", 00:27:30.951 "validity": 0.0 00:27:30.951 }, 00:27:30.951 { 00:27:30.951 "id": 15, 00:27:30.951 "state": "FREE", 00:27:30.951 "validity": 0.0 00:27:30.951 }, 00:27:30.951 { 00:27:30.951 "id": 16, 00:27:30.951 "state": "FREE", 00:27:30.951 "validity": 0.0 00:27:30.951 }, 00:27:30.951 { 00:27:30.951 "id": 17, 00:27:30.951 "state": "FREE", 00:27:30.951 "validity": 0.0 00:27:30.951 } 00:27:30.951 ], 00:27:30.951 "read-only": true 00:27:30.951 }, 00:27:30.951 { 00:27:30.951 "name": "cache_device", 00:27:30.951 "type": "bdev", 00:27:30.951 "chunks": [ 00:27:30.951 { 00:27:30.951 "id": 0, 00:27:30.951 "state": "CLOSED", 00:27:30.951 "utilization": 1.0 00:27:30.951 }, 00:27:30.951 { 00:27:30.951 "id": 1, 00:27:30.951 "state": "CLOSED", 00:27:30.951 "utilization": 1.0 00:27:30.951 }, 00:27:30.951 { 00:27:30.951 "id": 2, 00:27:30.951 "state": "OPEN", 00:27:30.951 "utilization": 0.001953125 00:27:30.951 }, 00:27:30.951 { 00:27:30.951 "id": 3, 00:27:30.951 "state": "OPEN", 00:27:30.951 "utilization": 0.0 00:27:30.951 } 00:27:30.951 ], 00:27:30.951 "read-only": true 00:27:30.951 }, 00:27:30.951 { 00:27:30.951 "name": "verbose_mode", 00:27:30.951 "value": true, 00:27:30.951 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:30.951 }, 00:27:30.951 { 00:27:30.951 "name": "prep_upgrade_on_shutdown", 00:27:30.951 "value": true, 00:27:30.951 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:30.951 } 00:27:30.951 ] 00:27:30.951 } 00:27:30.951 15:52:52 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:30.951 15:52:52 -- ftl/common.sh@130 -- # [[ -n 78123 ]] 00:27:30.951 15:52:52 -- ftl/common.sh@131 -- # killprocess 78123 00:27:30.951 15:52:52 -- common/autotest_common.sh@926 -- # '[' -z 78123 ']' 00:27:30.951 15:52:52 -- common/autotest_common.sh@930 -- 
# kill -0 78123 00:27:30.951 15:52:52 -- common/autotest_common.sh@931 -- # uname 00:27:30.951 15:52:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:30.951 15:52:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78123 00:27:30.951 15:52:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:30.951 15:52:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:30.951 15:52:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78123' 00:27:30.951 killing process with pid 78123 00:27:30.951 15:52:52 -- common/autotest_common.sh@945 -- # kill 78123 00:27:30.951 15:52:52 -- common/autotest_common.sh@950 -- # wait 78123 00:27:31.885 [2024-07-24 15:52:53.442285] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:31.885 [2024-07-24 15:52:53.460669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.885 [2024-07-24 15:52:53.460739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:31.885 [2024-07-24 15:52:53.460760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:31.885 [2024-07-24 15:52:53.460774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.885 [2024-07-24 15:52:53.460814] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:31.885 [2024-07-24 15:52:53.464158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.885 [2024-07-24 15:52:53.464193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:31.885 [2024-07-24 15:52:53.464210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.321 ms 00:27:31.885 [2024-07-24 15:52:53.464223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.869 [2024-07-24 15:53:01.919336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.869 [2024-07-24 15:53:01.919421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:41.870 [2024-07-24 15:53:01.919445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8455.128 ms 00:27:41.870 [2024-07-24 15:53:01.919458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.870 [2024-07-24 15:53:01.920689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.870 [2024-07-24 15:53:01.920722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:41.870 [2024-07-24 15:53:01.920739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.205 ms 00:27:41.870 [2024-07-24 15:53:01.920751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.870 [2024-07-24 15:53:01.922001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.870 [2024-07-24 15:53:01.922035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:41.870 [2024-07-24 15:53:01.922050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.200 ms 00:27:41.870 [2024-07-24 15:53:01.922062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.870 [2024-07-24 15:53:01.934657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.870 [2024-07-24 15:53:01.934730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:41.870 [2024-07-24 15:53:01.934751] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl] duration: 12.513 ms 00:27:41.870 [2024-07-24 15:53:01.934764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.870 [2024-07-24 15:53:01.942555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.870 [2024-07-24 15:53:01.942624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:41.870 [2024-07-24 15:53:01.942643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.713 ms 00:27:41.870 [2024-07-24 15:53:01.942676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.870 [2024-07-24 15:53:01.942817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.870 [2024-07-24 15:53:01.942838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:41.870 [2024-07-24 15:53:01.942852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.092 ms 00:27:41.870 [2024-07-24 15:53:01.942863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.870 [2024-07-24 15:53:01.955415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.870 [2024-07-24 15:53:01.955492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:41.870 [2024-07-24 15:53:01.955514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.522 ms 00:27:41.870 [2024-07-24 15:53:01.955526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.870 [2024-07-24 15:53:01.968145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.870 [2024-07-24 15:53:01.968215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:41.870 [2024-07-24 15:53:01.968236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.557 ms 00:27:41.871 [2024-07-24 15:53:01.968249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.871 [2024-07-24 15:53:01.980649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.871 [2024-07-24 15:53:01.980720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:41.871 [2024-07-24 15:53:01.980740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.345 ms 00:27:41.871 [2024-07-24 15:53:01.980752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.871 [2024-07-24 15:53:01.993245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.871 [2024-07-24 15:53:01.993308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:41.871 [2024-07-24 15:53:01.993327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.369 ms 00:27:41.871 [2024-07-24 15:53:01.993339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.871 [2024-07-24 15:53:01.993386] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:41.871 [2024-07-24 15:53:01.993412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:41.871 [2024-07-24 15:53:01.993427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:41.871 [2024-07-24 15:53:01.993440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:41.871 [2024-07-24 15:53:01.993452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:41.871 
[2024-07-24 15:53:01.993465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:41.871 [2024-07-24 15:53:01.993477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:41.871 [2024-07-24 15:53:01.993489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:41.871 [2024-07-24 15:53:01.993501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:41.871 [2024-07-24 15:53:01.993514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:41.871 [2024-07-24 15:53:01.993526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:41.871 [2024-07-24 15:53:01.993539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:41.871 [2024-07-24 15:53:01.993551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:41.871 [2024-07-24 15:53:01.993563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:41.872 [2024-07-24 15:53:01.993575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:41.872 [2024-07-24 15:53:01.993587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:41.872 [2024-07-24 15:53:01.993598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:41.872 [2024-07-24 15:53:01.993610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:41.872 [2024-07-24 15:53:01.993623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:41.872 [2024-07-24 15:53:01.993637] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:41.872 [2024-07-24 15:53:01.993649] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 6745893d-90a2-4d28-96e3-df48f8e2d21c 00:27:41.872 [2024-07-24 15:53:01.993682] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:41.872 [2024-07-24 15:53:01.993698] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:41.872 [2024-07-24 15:53:01.993710] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:41.872 [2024-07-24 15:53:01.993722] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:41.872 [2024-07-24 15:53:01.993734] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:41.872 [2024-07-24 15:53:01.993747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:41.872 [2024-07-24 15:53:01.993758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:41.872 [2024-07-24 15:53:01.993769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:41.872 [2024-07-24 15:53:01.993781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:41.872 [2024-07-24 15:53:01.993793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.872 [2024-07-24 15:53:01.993805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:41.872 [2024-07-24 15:53:01.993818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.409 ms 00:27:41.872 [2024-07-24 15:53:01.993829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.872 [2024-07-24 15:53:02.010842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.872 [2024-07-24 15:53:02.010913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:41.873 [2024-07-24 15:53:02.010934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.960 ms 00:27:41.873 [2024-07-24 15:53:02.010947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.873 [2024-07-24 15:53:02.011261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.873 [2024-07-24 15:53:02.011281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:41.873 [2024-07-24 15:53:02.011294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.239 ms 00:27:41.873 [2024-07-24 15:53:02.011314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.873 [2024-07-24 15:53:02.069436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.873 [2024-07-24 15:53:02.069506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:41.873 [2024-07-24 15:53:02.069526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.873 [2024-07-24 15:53:02.069539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.873 [2024-07-24 15:53:02.069602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.873 [2024-07-24 15:53:02.069617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:41.873 [2024-07-24 15:53:02.069630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.873 [2024-07-24 15:53:02.069642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.874 [2024-07-24 15:53:02.069766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.874 [2024-07-24 15:53:02.069786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:41.874 [2024-07-24 15:53:02.069799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.874 [2024-07-24 15:53:02.069811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.874 [2024-07-24 15:53:02.069836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.874 [2024-07-24 15:53:02.069850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:41.874 [2024-07-24 15:53:02.069862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.874 [2024-07-24 15:53:02.069874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.874 [2024-07-24 15:53:02.171859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.874 [2024-07-24 15:53:02.171932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:41.874 [2024-07-24 15:53:02.171960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.874 [2024-07-24 15:53:02.171973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.874 [2024-07-24 15:53:02.211628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.874 [2024-07-24 15:53:02.211699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:41.874 [2024-07-24 15:53:02.211719] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.874 [2024-07-24 15:53:02.211731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.874 [2024-07-24 15:53:02.211833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.874 [2024-07-24 15:53:02.211862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:41.875 [2024-07-24 15:53:02.211876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.875 [2024-07-24 15:53:02.211887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.875 [2024-07-24 15:53:02.211946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.875 [2024-07-24 15:53:02.211962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:41.875 [2024-07-24 15:53:02.211974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.875 [2024-07-24 15:53:02.211985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.875 [2024-07-24 15:53:02.212137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.875 [2024-07-24 15:53:02.212163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:41.875 [2024-07-24 15:53:02.212177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.875 [2024-07-24 15:53:02.212189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.875 [2024-07-24 15:53:02.212242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.875 [2024-07-24 15:53:02.212260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:41.875 [2024-07-24 15:53:02.212273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.875 [2024-07-24 15:53:02.212285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.875 [2024-07-24 15:53:02.212339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.875 [2024-07-24 15:53:02.212355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:41.875 [2024-07-24 15:53:02.212373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.875 [2024-07-24 15:53:02.212385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.875 [2024-07-24 15:53:02.212441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.875 [2024-07-24 15:53:02.212457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:41.875 [2024-07-24 15:53:02.212469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.875 [2024-07-24 15:53:02.212480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.875 [2024-07-24 15:53:02.212628] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8751.975 ms, result 0 00:27:45.164 15:53:06 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:45.164 15:53:06 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:45.164 15:53:06 -- ftl/common.sh@81 -- # local base_bdev= 00:27:45.164 15:53:06 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:45.164 15:53:06 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:45.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
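The relaunch traced below is the second half of the upgrade check: earlier the test filled ftln1 in 1024 MiB passes via tcp_dd, recorded an md5sum for each pass, counted used NV-cache chunks by filtering the bdev_ftl_get_properties JSON with jq (select(.utilization != 0.0)), set prep_upgrade_on_shutdown, and shut the target down; now it restarts spdk_tgt from the saved tgt.json and waits on its RPC socket before re-applying properties. A minimal sketch of that relaunch pattern, reconstructed from the traced commands (paths are verbatim from the trace; wait_for_rpc is a hypothetical stand-in for the harness's waitforlisten helper):

SPDK=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk.sock

# Relaunch the target on core 0 with the config saved at shutdown.
"$SPDK/build/bin/spdk_tgt" --cpumask='[0]' \
    --config="$SPDK/test/ftl/config/tgt.json" &
spdk_tgt_pid=$!

# Hypothetical stand-in for waitforlisten: poll until the RPC socket
# answers a harmless query, or give up after ~50 s.
wait_for_rpc() {
    local i
    for (( i = 0; i < 100; i++ )); do
        if "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}
wait_for_rpc || exit 1

# Re-enable verbose mode on the restored FTL bdev, as the trace does
# further down; the md5sums recorded before shutdown are then compared
# against fresh reads to confirm the data survived the upgrade path.
"$SPDK/scripts/rpc.py" -s "$RPC_SOCK" bdev_ftl_set_property -b ftl -p verbose_mode -v true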
00:27:45.164 15:53:06 -- ftl/common.sh@89 -- # spdk_tgt_pid=78755 00:27:45.164 15:53:06 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:45.164 15:53:06 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:45.164 15:53:06 -- ftl/common.sh@91 -- # waitforlisten 78755 00:27:45.164 15:53:06 -- common/autotest_common.sh@819 -- # '[' -z 78755 ']' 00:27:45.164 15:53:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.164 15:53:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:45.164 15:53:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.164 15:53:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:45.164 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:27:45.164 [2024-07-24 15:53:06.184743] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:45.164 [2024-07-24 15:53:06.184885] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78755 ] 00:27:45.164 [2024-07-24 15:53:06.346670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.164 [2024-07-24 15:53:06.589424] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:45.164 [2024-07-24 15:53:06.589702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.102 [2024-07-24 15:53:07.388189] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:46.102 [2024-07-24 15:53:07.388280] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:46.103 [2024-07-24 15:53:07.530050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.103 [2024-07-24 15:53:07.530142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:46.103 [2024-07-24 15:53:07.530182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:46.103 [2024-07-24 15:53:07.530203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.103 [2024-07-24 15:53:07.530336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.103 [2024-07-24 15:53:07.530378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:46.103 [2024-07-24 15:53:07.530401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:27:46.103 [2024-07-24 15:53:07.530429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.103 [2024-07-24 15:53:07.530489] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:46.103 [2024-07-24 15:53:07.531624] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:46.103 [2024-07-24 15:53:07.531674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.103 [2024-07-24 15:53:07.531707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:46.103 [2024-07-24 15:53:07.531730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.195 ms 00:27:46.103 [2024-07-24 15:53:07.531749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:27:46.103 [2024-07-24 15:53:07.533065] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:46.103 [2024-07-24 15:53:07.549599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.103 [2024-07-24 15:53:07.549683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:46.103 [2024-07-24 15:53:07.549723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.530 ms 00:27:46.103 [2024-07-24 15:53:07.549744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.103 [2024-07-24 15:53:07.549901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.103 [2024-07-24 15:53:07.549942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:46.103 [2024-07-24 15:53:07.549972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:27:46.103 [2024-07-24 15:53:07.549994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.103 [2024-07-24 15:53:07.555153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.103 [2024-07-24 15:53:07.555221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:46.103 [2024-07-24 15:53:07.555251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.903 ms 00:27:46.103 [2024-07-24 15:53:07.555271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.103 [2024-07-24 15:53:07.555388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.103 [2024-07-24 15:53:07.555418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:46.103 [2024-07-24 15:53:07.555443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:27:46.103 [2024-07-24 15:53:07.555463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.103 [2024-07-24 15:53:07.555588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.103 [2024-07-24 15:53:07.555616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:46.103 [2024-07-24 15:53:07.555637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:46.103 [2024-07-24 15:53:07.555656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.103 [2024-07-24 15:53:07.555724] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:46.103 [2024-07-24 15:53:07.560228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.103 [2024-07-24 15:53:07.560277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:46.103 [2024-07-24 15:53:07.560304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.526 ms 00:27:46.103 [2024-07-24 15:53:07.560326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.103 [2024-07-24 15:53:07.560414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.103 [2024-07-24 15:53:07.560441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:46.103 [2024-07-24 15:53:07.560465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:46.103 [2024-07-24 15:53:07.560484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.103 [2024-07-24 15:53:07.560602] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:46.103 
[2024-07-24 15:53:07.560653] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:46.103 [2024-07-24 15:53:07.560716] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:46.103 [2024-07-24 15:53:07.560768] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:46.103 [2024-07-24 15:53:07.560884] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:46.103 [2024-07-24 15:53:07.560916] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:46.103 [2024-07-24 15:53:07.560941] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:27:46.103 [2024-07-24 15:53:07.560966] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:46.103 [2024-07-24 15:53:07.560990] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:46.103 [2024-07-24 15:53:07.561012] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:46.103 [2024-07-24 15:53:07.561030] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:46.103 [2024-07-24 15:53:07.561050] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:46.103 [2024-07-24 15:53:07.561077] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:46.103 [2024-07-24 15:53:07.561134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.103 [2024-07-24 15:53:07.561165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:46.103 [2024-07-24 15:53:07.561201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.537 ms 00:27:46.103 [2024-07-24 15:53:07.561223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.103 [2024-07-24 15:53:07.561343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.103 [2024-07-24 15:53:07.561370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:46.103 [2024-07-24 15:53:07.561391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:27:46.103 [2024-07-24 15:53:07.561410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.103 [2024-07-24 15:53:07.561542] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:46.103 [2024-07-24 15:53:07.561571] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:46.103 [2024-07-24 15:53:07.561601] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:46.103 [2024-07-24 15:53:07.561622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.103 [2024-07-24 15:53:07.561643] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:46.103 [2024-07-24 15:53:07.561661] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:46.103 [2024-07-24 15:53:07.561681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:46.103 [2024-07-24 15:53:07.561699] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:46.103 [2024-07-24 15:53:07.561717] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 
00:27:46.103 [2024-07-24 15:53:07.561735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.103 [2024-07-24 15:53:07.561753] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:46.103 [2024-07-24 15:53:07.561772] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:46.103 [2024-07-24 15:53:07.561791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.103 [2024-07-24 15:53:07.561809] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:46.103 [2024-07-24 15:53:07.561827] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:46.103 [2024-07-24 15:53:07.561845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.103 [2024-07-24 15:53:07.561864] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:46.103 [2024-07-24 15:53:07.561882] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:46.103 [2024-07-24 15:53:07.561901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.103 [2024-07-24 15:53:07.561918] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:46.103 [2024-07-24 15:53:07.561936] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:46.103 [2024-07-24 15:53:07.561955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:46.103 [2024-07-24 15:53:07.561974] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:46.103 [2024-07-24 15:53:07.561992] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:46.103 [2024-07-24 15:53:07.562011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:46.103 [2024-07-24 15:53:07.562029] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:46.103 [2024-07-24 15:53:07.562046] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:46.103 [2024-07-24 15:53:07.562065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:46.103 [2024-07-24 15:53:07.562082] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:46.103 [2024-07-24 15:53:07.562122] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:46.103 [2024-07-24 15:53:07.562142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:46.103 [2024-07-24 15:53:07.562162] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:46.103 [2024-07-24 15:53:07.562185] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:46.103 [2024-07-24 15:53:07.562203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:46.103 [2024-07-24 15:53:07.562221] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:46.103 [2024-07-24 15:53:07.562241] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:46.103 [2024-07-24 15:53:07.562259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.103 [2024-07-24 15:53:07.562277] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:46.103 [2024-07-24 15:53:07.562296] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:46.103 [2024-07-24 15:53:07.562313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.103 [2024-07-24 15:53:07.562330] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device 
layout: 00:27:46.104 [2024-07-24 15:53:07.562349] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:46.104 [2024-07-24 15:53:07.562368] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:46.104 [2024-07-24 15:53:07.562395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.104 [2024-07-24 15:53:07.562423] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:46.104 [2024-07-24 15:53:07.562463] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:46.104 [2024-07-24 15:53:07.562497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:46.104 [2024-07-24 15:53:07.562522] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:46.104 [2024-07-24 15:53:07.562540] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:46.104 [2024-07-24 15:53:07.562560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:46.104 [2024-07-24 15:53:07.562581] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:46.104 [2024-07-24 15:53:07.562604] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:46.104 [2024-07-24 15:53:07.562627] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:46.104 [2024-07-24 15:53:07.562662] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:46.104 [2024-07-24 15:53:07.562687] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:46.104 [2024-07-24 15:53:07.562707] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:46.104 [2024-07-24 15:53:07.562727] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:46.104 [2024-07-24 15:53:07.562748] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:46.104 [2024-07-24 15:53:07.562767] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:46.104 [2024-07-24 15:53:07.562797] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:46.104 [2024-07-24 15:53:07.562839] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:46.104 [2024-07-24 15:53:07.562871] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:46.104 [2024-07-24 15:53:07.562952] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:46.104 [2024-07-24 15:53:07.562976] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:46.104 [2024-07-24 15:53:07.562998] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:46.104 [2024-07-24 15:53:07.563017] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:46.104 [2024-07-24 15:53:07.563040] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:46.104 [2024-07-24 15:53:07.563068] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:46.104 [2024-07-24 15:53:07.563103] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:46.104 [2024-07-24 15:53:07.563127] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:46.104 [2024-07-24 15:53:07.563147] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:46.104 [2024-07-24 15:53:07.563169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.104 [2024-07-24 15:53:07.563189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:46.104 [2024-07-24 15:53:07.563208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.683 ms 00:27:46.104 [2024-07-24 15:53:07.563227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.104 [2024-07-24 15:53:07.582205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.104 [2024-07-24 15:53:07.582274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:46.104 [2024-07-24 15:53:07.582306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.873 ms 00:27:46.104 [2024-07-24 15:53:07.582326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.104 [2024-07-24 15:53:07.582422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.104 [2024-07-24 15:53:07.582448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:46.104 [2024-07-24 15:53:07.582470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:46.104 [2024-07-24 15:53:07.582489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.104 [2024-07-24 15:53:07.621572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.104 [2024-07-24 15:53:07.621648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:46.104 [2024-07-24 15:53:07.621688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 38.956 ms 00:27:46.104 [2024-07-24 15:53:07.621708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.104 [2024-07-24 15:53:07.621824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.104 [2024-07-24 15:53:07.621850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:46.104 [2024-07-24 15:53:07.621872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:46.104 [2024-07-24 15:53:07.621891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.104 [2024-07-24 15:53:07.622411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.104 [2024-07-24 15:53:07.622449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:46.104 [2024-07-24 
15:53:07.622476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.404 ms 00:27:46.104 [2024-07-24 15:53:07.622507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.104 [2024-07-24 15:53:07.622599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.104 [2024-07-24 15:53:07.622626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:46.104 [2024-07-24 15:53:07.622662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:46.104 [2024-07-24 15:53:07.622686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.104 [2024-07-24 15:53:07.641000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.104 [2024-07-24 15:53:07.641072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:46.104 [2024-07-24 15:53:07.641150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.260 ms 00:27:46.104 [2024-07-24 15:53:07.641171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.104 [2024-07-24 15:53:07.658163] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:46.104 [2024-07-24 15:53:07.658243] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:46.104 [2024-07-24 15:53:07.658276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.104 [2024-07-24 15:53:07.658296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:46.104 [2024-07-24 15:53:07.658318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.841 ms 00:27:46.104 [2024-07-24 15:53:07.658336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.104 [2024-07-24 15:53:07.677497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.104 [2024-07-24 15:53:07.677577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:46.104 [2024-07-24 15:53:07.677632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 19.043 ms 00:27:46.104 [2024-07-24 15:53:07.677653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.104 [2024-07-24 15:53:07.695130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.104 [2024-07-24 15:53:07.695213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:46.104 [2024-07-24 15:53:07.695253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.347 ms 00:27:46.104 [2024-07-24 15:53:07.695273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.363 [2024-07-24 15:53:07.711708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.363 [2024-07-24 15:53:07.711790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:46.363 [2024-07-24 15:53:07.711822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.324 ms 00:27:46.363 [2024-07-24 15:53:07.711843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.363 [2024-07-24 15:53:07.712551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.363 [2024-07-24 15:53:07.712596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:46.363 [2024-07-24 15:53:07.712624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.409 ms 
00:27:46.363 [2024-07-24 15:53:07.712646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.363 [2024-07-24 15:53:07.792991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.363 [2024-07-24 15:53:07.793068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:46.363 [2024-07-24 15:53:07.793124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 80.296 ms 00:27:46.363 [2024-07-24 15:53:07.793147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.363 [2024-07-24 15:53:07.806635] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:46.363 [2024-07-24 15:53:07.807740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.363 [2024-07-24 15:53:07.807787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:46.363 [2024-07-24 15:53:07.807819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.480 ms 00:27:46.363 [2024-07-24 15:53:07.807843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.363 [2024-07-24 15:53:07.808034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.363 [2024-07-24 15:53:07.808066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:46.363 [2024-07-24 15:53:07.808111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:46.363 [2024-07-24 15:53:07.808136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.363 [2024-07-24 15:53:07.808259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.363 [2024-07-24 15:53:07.808288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:46.363 [2024-07-24 15:53:07.808312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:46.363 [2024-07-24 15:53:07.808333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.363 [2024-07-24 15:53:07.810366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.363 [2024-07-24 15:53:07.810416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:46.363 [2024-07-24 15:53:07.810442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.981 ms 00:27:46.363 [2024-07-24 15:53:07.810462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.363 [2024-07-24 15:53:07.810530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.363 [2024-07-24 15:53:07.810556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:46.363 [2024-07-24 15:53:07.810578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:46.363 [2024-07-24 15:53:07.810598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.364 [2024-07-24 15:53:07.810689] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:46.364 [2024-07-24 15:53:07.810721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.364 [2024-07-24 15:53:07.810742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:46.364 [2024-07-24 15:53:07.810769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:46.364 [2024-07-24 15:53:07.810790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.364 [2024-07-24 15:53:07.844259] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.364 [2024-07-24 15:53:07.844338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:46.364 [2024-07-24 15:53:07.844377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 33.412 ms 00:27:46.364 [2024-07-24 15:53:07.844398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.364 [2024-07-24 15:53:07.844555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.364 [2024-07-24 15:53:07.844594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:46.364 [2024-07-24 15:53:07.844619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:27:46.364 [2024-07-24 15:53:07.844637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.364 [2024-07-24 15:53:07.846060] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 315.498 ms, result 0 00:27:46.364 [2024-07-24 15:53:07.860800] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.364 [2024-07-24 15:53:07.876884] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:46.364 [2024-07-24 15:53:07.886107] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:46.626 15:53:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:46.626 15:53:07 -- common/autotest_common.sh@852 -- # return 0 00:27:46.626 15:53:07 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:46.626 15:53:07 -- ftl/common.sh@95 -- # return 0 00:27:46.626 15:53:07 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:46.885 [2024-07-24 15:53:08.243549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.885 [2024-07-24 15:53:08.243629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:46.885 [2024-07-24 15:53:08.243662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:46.885 [2024-07-24 15:53:08.243683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.885 [2024-07-24 15:53:08.243743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.885 [2024-07-24 15:53:08.243769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:46.885 [2024-07-24 15:53:08.243788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:46.885 [2024-07-24 15:53:08.243805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.885 [2024-07-24 15:53:08.243858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.885 [2024-07-24 15:53:08.243881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:46.885 [2024-07-24 15:53:08.243911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:46.885 [2024-07-24 15:53:08.243931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.885 [2024-07-24 15:53:08.244051] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.471 ms, result 0 00:27:46.885 true 00:27:46.885 15:53:08 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b 
ftl
{
  "name": "ftl",
  "properties": [
    {
      "name": "superblock_version",
      "value": 5,
      "read-only": true
    },
    {
      "name": "base_device",
      "bands": [
        { "id": 0,  "state": "CLOSED", "validity": 1.0 },
        { "id": 1,  "state": "CLOSED", "validity": 1.0 },
        { "id": 2,  "state": "CLOSED", "validity": 0.007843137254901933 },
        { "id": 3,  "state": "FREE", "validity": 0.0 },
        { "id": 4,  "state": "FREE", "validity": 0.0 },
        { "id": 5,  "state": "FREE", "validity": 0.0 },
        { "id": 6,  "state": "FREE", "validity": 0.0 },
        { "id": 7,  "state": "FREE", "validity": 0.0 },
        { "id": 8,  "state": "FREE", "validity": 0.0 },
        { "id": 9,  "state": "FREE", "validity": 0.0 },
        { "id": 10, "state": "FREE", "validity": 0.0 },
        { "id": 11, "state": "FREE", "validity": 0.0 },
        { "id": 12, "state": "FREE", "validity": 0.0 },
        { "id": 13, "state": "FREE", "validity": 0.0 },
        { "id": 14, "state": "FREE", "validity": 0.0 },
        { "id": 15, "state": "FREE", "validity": 0.0 },
        { "id": 16, "state": "FREE", "validity": 0.0 },
        { "id": 17, "state": "FREE", "validity": 0.0 }
      ],
      "read-only": true
    },
    {
      "name": "cache_device",
      "type": "bdev",
      "chunks": [
        { "id": 0, "state": "OPEN", "utilization": 0.0 },
        { "id": 1, "state": "OPEN", "utilization": 0.0 },
        { "id": 2, "state": "FREE", "utilization": 0.0 },
        { "id": 3, "state": "FREE", "utilization": 0.0 }
      ],
      "read-only": true
    },
    {
      "name": "verbose_mode",
      "value": true,
      "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
    },
    {
      "name": "prep_upgrade_on_shutdown",
      "value": false,
      "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
    }
  ]
}
15:53:08 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | 
select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:47.145 15:53:08 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:47.145 15:53:08 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:47.403 15:53:08 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:47.403 15:53:08 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:47.403 15:53:08 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:47.403 15:53:08 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:47.403 15:53:08 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:47.661 Validate MD5 checksum, iteration 1 00:27:47.661 15:53:09 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:47.661 15:53:09 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:47.661 15:53:09 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:47.661 15:53:09 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:47.661 15:53:09 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:47.661 15:53:09 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:47.661 15:53:09 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:47.661 15:53:09 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:47.661 15:53:09 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:47.661 15:53:09 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:47.661 15:53:09 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:47.662 15:53:09 -- ftl/common.sh@154 -- # return 0 00:27:47.662 15:53:09 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:47.920 [2024-07-24 15:53:09.295122] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
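The two jq filters above act as gates before the checksum passes: the first counts cache chunks with non-zero utilization, the second counts bands still in OPENED state, and the test only proceeds when both counts are 0. A standalone sketch of those gates with the filters copied verbatim from the trace (capturing the RPC output once into props is a simplification; the script re-invokes ftl_get_properties for each jq):

    # Gate checks: no used cache chunks and no open bands before validating checksums.
    props=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl)
    used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
    opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
    [[ $used -ne 0 || $opened -ne 0 ]] && exit 1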
00:27:47.920 [2024-07-24 15:53:09.295264] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78807 ] 00:27:47.920 [2024-07-24 15:53:09.454639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.179 [2024-07-24 15:53:09.714493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.211  Copying: 466/1024 [MB] (466 MBps) Copying: 896/1024 [MB] (430 MBps) Copying: 1024/1024 [MB] (average 436 MBps) 00:27:53.211 00:27:53.211 15:53:14 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:53.211 15:53:14 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:55.755 15:53:16 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:55.755 15:53:16 -- ftl/upgrade_shutdown.sh@103 -- # sum=33b3104801e4d1f63a22012ec88bd2e8 00:27:55.755 Validate MD5 checksum, iteration 2 00:27:55.755 15:53:16 -- ftl/upgrade_shutdown.sh@105 -- # [[ 33b3104801e4d1f63a22012ec88bd2e8 != \3\3\b\3\1\0\4\8\0\1\e\4\d\1\f\6\3\a\2\2\0\1\2\e\c\8\8\b\d\2\e\8 ]] 00:27:55.755 15:53:16 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:55.755 15:53:16 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:55.755 15:53:16 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:55.755 15:53:16 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:55.755 15:53:16 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:55.755 15:53:16 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:55.755 15:53:16 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:55.755 15:53:16 -- ftl/common.sh@154 -- # return 0 00:27:55.755 15:53:16 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:55.755 [2024-07-24 15:53:16.835766] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
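The heavily escaped right-hand side of the [[ ... != \3\3\b\3... ]] comparisons above is an artifact of bash xtrace, not of the data: inside [[ ]] an unquoted right operand of != is a glob pattern, so the trace prints it with each character backslash-escaped to show it is matched literally. A two-line illustration using the iteration-1 sum from above:

    set -x
    sum=33b3104801e4d1f63a22012ec88bd2e8
    [[ $sum != $sum ]] || echo OK   # traced roughly as: [[ 33b3... != \3\3\b\3... ]]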
00:27:55.755 [2024-07-24 15:53:16.835912] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78884 ] 00:27:55.755 [2024-07-24 15:53:17.009574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.755 [2024-07-24 15:53:17.197642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.904  Copying: 502/1024 [MB] (502 MBps) Copying: 952/1024 [MB] (450 MBps) Copying: 1024/1024 [MB] (average 476 MBps) 00:28:01.904 00:28:01.904 15:53:23 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:01.904 15:53:23 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:03.813 15:53:25 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:03.813 15:53:25 -- ftl/upgrade_shutdown.sh@103 -- # sum=5a126dd73759ae916f97deeec9868d79 00:28:03.813 15:53:25 -- ftl/upgrade_shutdown.sh@105 -- # [[ 5a126dd73759ae916f97deeec9868d79 != \5\a\1\2\6\d\d\7\3\7\5\9\a\e\9\1\6\f\9\7\d\e\e\e\c\9\8\6\8\d\7\9 ]] 00:28:03.813 15:53:25 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:03.813 15:53:25 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:03.813 15:53:25 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:03.813 15:53:25 -- ftl/common.sh@137 -- # [[ -n 78755 ]] 00:28:03.813 15:53:25 -- ftl/common.sh@138 -- # kill -9 78755 00:28:03.813 15:53:25 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:03.813 15:53:25 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:03.813 15:53:25 -- ftl/common.sh@81 -- # local base_bdev= 00:28:03.813 15:53:25 -- ftl/common.sh@82 -- # local cache_bdev= 00:28:03.813 15:53:25 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:03.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.813 15:53:25 -- ftl/common.sh@89 -- # spdk_tgt_pid=78969 00:28:03.813 15:53:25 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:03.813 15:53:25 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:03.813 15:53:25 -- ftl/common.sh@91 -- # waitforlisten 78969 00:28:03.813 15:53:25 -- common/autotest_common.sh@819 -- # '[' -z 78969 ']' 00:28:03.813 15:53:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.813 15:53:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:03.813 15:53:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.813 15:53:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:03.813 15:53:25 -- common/autotest_common.sh@10 -- # set +x 00:28:04.071 [2024-07-24 15:53:25.438678] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
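This begins the dirty-shutdown half of the test: tcp_target_shutdown_dirty sends SIGKILL to the first target (pid 78755 above), so FTL never gets a chance to persist a clean state, and a fresh target (pid 78969) is then started from the same tgt.json and must recover on its own. A condensed sketch of that sequence (waitforlisten is the autotest_common.sh helper seen in the trace; the surrounding harness is omitted):

    # Kill the target hard, then relaunch it from the same config.
    kill -9 "$spdk_tgt_pid"   # $spdk_tgt_pid holds the first target's pid; superblock stays dirty
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"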
00:28:04.071 [2024-07-24 15:53:25.438886] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78969 ] 00:28:04.071 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 818: 78755 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:04.071 [2024-07-24 15:53:25.622365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.329 [2024-07-24 15:53:25.810694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:04.329 [2024-07-24 15:53:25.810997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.263 [2024-07-24 15:53:26.631787] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:05.263 [2024-07-24 15:53:26.631885] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:05.263 [2024-07-24 15:53:26.774214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.263 [2024-07-24 15:53:26.774289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:05.263 [2024-07-24 15:53:26.774322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:05.263 [2024-07-24 15:53:26.774342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.263 [2024-07-24 15:53:26.774487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.263 [2024-07-24 15:53:26.774528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:05.263 [2024-07-24 15:53:26.774551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.093 ms 00:28:05.263 [2024-07-24 15:53:26.774579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.263 [2024-07-24 15:53:26.774640] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:05.263 [2024-07-24 15:53:26.775998] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:05.263 [2024-07-24 15:53:26.776048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.263 [2024-07-24 15:53:26.776079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:05.263 [2024-07-24 15:53:26.776119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.419 ms 00:28:05.263 [2024-07-24 15:53:26.776140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.263 [2024-07-24 15:53:26.776790] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:05.263 [2024-07-24 15:53:26.797686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.263 [2024-07-24 15:53:26.797787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:05.263 [2024-07-24 15:53:26.797821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.893 ms 00:28:05.263 [2024-07-24 15:53:26.797840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.263 [2024-07-24 15:53:26.810534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.263 [2024-07-24 15:53:26.810620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:05.263 [2024-07-24 15:53:26.810662] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:28:05.263 [2024-07-24 15:53:26.810684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.263 [2024-07-24 15:53:26.811515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.263 [2024-07-24 15:53:26.811558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:05.263 [2024-07-24 15:53:26.811586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.636 ms 00:28:05.263 [2024-07-24 15:53:26.811607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.263 [2024-07-24 15:53:26.811687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.263 [2024-07-24 15:53:26.811715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:05.263 [2024-07-24 15:53:26.811737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:05.263 [2024-07-24 15:53:26.811757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.263 [2024-07-24 15:53:26.811831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.263 [2024-07-24 15:53:26.811858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:05.263 [2024-07-24 15:53:26.811880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:28:05.263 [2024-07-24 15:53:26.811899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.263 [2024-07-24 15:53:26.811965] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:05.263 [2024-07-24 15:53:26.816426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.263 [2024-07-24 15:53:26.816484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:05.263 [2024-07-24 15:53:26.816510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.481 ms 00:28:05.263 [2024-07-24 15:53:26.816529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.263 [2024-07-24 15:53:26.816621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.263 [2024-07-24 15:53:26.816648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:05.263 [2024-07-24 15:53:26.816672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:05.263 [2024-07-24 15:53:26.816691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.263 [2024-07-24 15:53:26.816783] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:05.263 [2024-07-24 15:53:26.816831] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:28:05.263 [2024-07-24 15:53:26.816894] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:05.263 [2024-07-24 15:53:26.816942] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:28:05.264 [2024-07-24 15:53:26.817080] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:28:05.264 [2024-07-24 15:53:26.817147] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:05.264 [2024-07-24 15:53:26.817183] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:28:05.264 [2024-07-24 15:53:26.817221] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:05.264 [2024-07-24 15:53:26.817245] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:05.264 [2024-07-24 15:53:26.817267] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:05.264 [2024-07-24 15:53:26.817285] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:05.264 [2024-07-24 15:53:26.817304] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:28:05.264 [2024-07-24 15:53:26.817323] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:28:05.264 [2024-07-24 15:53:26.817344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.264 [2024-07-24 15:53:26.817363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:05.264 [2024-07-24 15:53:26.817384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.566 ms 00:28:05.264 [2024-07-24 15:53:26.817403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.264 [2024-07-24 15:53:26.817526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.264 [2024-07-24 15:53:26.817570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:05.264 [2024-07-24 15:53:26.817594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:28:05.264 [2024-07-24 15:53:26.817613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.264 [2024-07-24 15:53:26.817748] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:05.264 [2024-07-24 15:53:26.817778] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:05.264 [2024-07-24 15:53:26.817801] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:05.264 [2024-07-24 15:53:26.817822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:05.264 [2024-07-24 15:53:26.817842] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:05.264 [2024-07-24 15:53:26.817861] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:05.264 [2024-07-24 15:53:26.817879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:05.264 [2024-07-24 15:53:26.817898] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:05.264 [2024-07-24 15:53:26.817918] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:05.264 [2024-07-24 15:53:26.817937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:05.264 [2024-07-24 15:53:26.817955] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:05.264 [2024-07-24 15:53:26.817975] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:05.264 [2024-07-24 15:53:26.817994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:05.264 [2024-07-24 15:53:26.818013] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:05.264 [2024-07-24 15:53:26.818031] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:28:05.264 [2024-07-24 15:53:26.818050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:05.264 [2024-07-24 15:53:26.818068] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:28:05.264 [2024-07-24 15:53:26.818103] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:28:05.264 [2024-07-24 15:53:26.818126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:05.264 [2024-07-24 15:53:26.818146] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:28:05.264 [2024-07-24 15:53:26.818164] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:28:05.264 [2024-07-24 15:53:26.818184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:28:05.264 [2024-07-24 15:53:26.818203] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:05.264 [2024-07-24 15:53:26.818223] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:05.264 [2024-07-24 15:53:26.818242] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:05.264 [2024-07-24 15:53:26.818260] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:05.264 [2024-07-24 15:53:26.818278] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:28:05.264 [2024-07-24 15:53:26.818296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:05.264 [2024-07-24 15:53:26.818314] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:05.264 [2024-07-24 15:53:26.818333] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:05.264 [2024-07-24 15:53:26.818352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:05.264 [2024-07-24 15:53:26.818371] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:05.264 [2024-07-24 15:53:26.818390] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:28:05.264 [2024-07-24 15:53:26.818409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:05.264 [2024-07-24 15:53:26.818430] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:05.264 [2024-07-24 15:53:26.818450] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:05.264 [2024-07-24 15:53:26.818468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:05.264 [2024-07-24 15:53:26.818486] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:05.264 [2024-07-24 15:53:26.818505] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:28:05.264 [2024-07-24 15:53:26.818524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:05.264 [2024-07-24 15:53:26.818542] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:05.264 [2024-07-24 15:53:26.818561] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:05.264 [2024-07-24 15:53:26.818589] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:05.264 [2024-07-24 15:53:26.818610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:05.264 [2024-07-24 15:53:26.818630] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:05.264 [2024-07-24 15:53:26.818664] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:05.264 [2024-07-24 15:53:26.818687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:05.264 [2024-07-24 15:53:26.818707] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:05.264 [2024-07-24 15:53:26.818725] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:28:05.264 [2024-07-24 15:53:26.818745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:05.264 [2024-07-24 15:53:26.818765] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:05.264 [2024-07-24 15:53:26.818789] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:05.264 [2024-07-24 15:53:26.818812] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:05.264 [2024-07-24 15:53:26.818833] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:28:05.264 [2024-07-24 15:53:26.818852] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:28:05.264 [2024-07-24 15:53:26.818883] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:28:05.264 [2024-07-24 15:53:26.818902] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:28:05.264 [2024-07-24 15:53:26.818922] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:28:05.264 [2024-07-24 15:53:26.818942] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:28:05.264 [2024-07-24 15:53:26.818962] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:28:05.264 [2024-07-24 15:53:26.818999] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:28:05.264 [2024-07-24 15:53:26.819021] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:28:05.264 [2024-07-24 15:53:26.819045] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:28:05.264 [2024-07-24 15:53:26.819065] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:28:05.264 [2024-07-24 15:53:26.819102] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:28:05.264 [2024-07-24 15:53:26.819125] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:05.264 [2024-07-24 15:53:26.819147] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:05.264 [2024-07-24 15:53:26.819168] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:05.264 [2024-07-24 15:53:26.819189] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:05.264 [2024-07-24 15:53:26.819211] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:05.264 
[2024-07-24 15:53:26.819231] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:05.264 [2024-07-24 15:53:26.819253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.264 [2024-07-24 15:53:26.819274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:05.264 [2024-07-24 15:53:26.819293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.561 ms 00:28:05.264 [2024-07-24 15:53:26.819313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.264 [2024-07-24 15:53:26.844545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.264 [2024-07-24 15:53:26.844635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:05.264 [2024-07-24 15:53:26.844672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.121 ms 00:28:05.264 [2024-07-24 15:53:26.844696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.264 [2024-07-24 15:53:26.844800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.264 [2024-07-24 15:53:26.844836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:05.265 [2024-07-24 15:53:26.844860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:05.265 [2024-07-24 15:53:26.844880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.523 [2024-07-24 15:53:26.884834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.523 [2024-07-24 15:53:26.884910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:05.523 [2024-07-24 15:53:26.884949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 39.814 ms 00:28:05.523 [2024-07-24 15:53:26.884970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.523 [2024-07-24 15:53:26.885080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.523 [2024-07-24 15:53:26.885133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:05.523 [2024-07-24 15:53:26.885165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:05.523 [2024-07-24 15:53:26.885182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.523 [2024-07-24 15:53:26.885413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.523 [2024-07-24 15:53:26.885451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:05.523 [2024-07-24 15:53:26.885476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.109 ms 00:28:05.523 [2024-07-24 15:53:26.885496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.523 [2024-07-24 15:53:26.885593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.523 [2024-07-24 15:53:26.885620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:05.523 [2024-07-24 15:53:26.885641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:28:05.523 [2024-07-24 15:53:26.885671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.523 [2024-07-24 15:53:26.904484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.523 [2024-07-24 15:53:26.904564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:05.523 [2024-07-24 
15:53:26.904603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.757 ms 00:28:05.523 [2024-07-24 15:53:26.904632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.523 [2024-07-24 15:53:26.904902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.523 [2024-07-24 15:53:26.904934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:05.523 [2024-07-24 15:53:26.904959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:05.523 [2024-07-24 15:53:26.904979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.523 [2024-07-24 15:53:26.926201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.523 [2024-07-24 15:53:26.926280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:05.523 [2024-07-24 15:53:26.926348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 21.177 ms 00:28:05.523 [2024-07-24 15:53:26.926372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.523 [2024-07-24 15:53:26.939527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.523 [2024-07-24 15:53:26.939610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:05.523 [2024-07-24 15:53:26.939642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.357 ms 00:28:05.523 [2024-07-24 15:53:26.939661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.523 [2024-07-24 15:53:27.019616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.523 [2024-07-24 15:53:27.019708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:05.523 [2024-07-24 15:53:27.019740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 79.802 ms 00:28:05.523 [2024-07-24 15:53:27.019760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.523 [2024-07-24 15:53:27.019949] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:05.523 [2024-07-24 15:53:27.020026] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:05.523 [2024-07-24 15:53:27.020121] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:05.523 [2024-07-24 15:53:27.020190] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:05.523 [2024-07-24 15:53:27.020216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.523 [2024-07-24 15:53:27.020236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:05.523 [2024-07-24 15:53:27.020258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.338 ms 00:28:05.523 [2024-07-24 15:53:27.020289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.523 [2024-07-24 15:53:27.020445] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:05.523 [2024-07-24 15:53:27.020494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.523 [2024-07-24 15:53:27.020517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:05.523 [2024-07-24 15:53:27.020539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:28:05.523 [2024-07-24 
15:53:27.020559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.523 [2024-07-24 15:53:27.041271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.523 [2024-07-24 15:53:27.041361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:05.523 [2024-07-24 15:53:27.041394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.655 ms 00:28:05.523 [2024-07-24 15:53:27.041414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.523 [2024-07-24 15:53:27.053950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.523 [2024-07-24 15:53:27.054028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:05.523 [2024-07-24 15:53:27.054060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:28:05.523 [2024-07-24 15:53:27.054079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.523 [2024-07-24 15:53:27.054241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.523 [2024-07-24 15:53:27.054271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:28:05.523 [2024-07-24 15:53:27.054304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:05.523 [2024-07-24 15:53:27.054323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.523 [2024-07-24 15:53:27.054514] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:28:06.089 [2024-07-24 15:53:27.505612] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:28:06.089 [2024-07-24 15:53:27.505822] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:28:06.656 [2024-07-24 15:53:27.975403] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:28:06.656 [2024-07-24 15:53:27.975546] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:06.656 [2024-07-24 15:53:27.975577] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:06.656 [2024-07-24 15:53:27.975601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.656 [2024-07-24 15:53:27.975622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:06.656 [2024-07-24 15:53:27.975647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 921.234 ms 00:28:06.656 [2024-07-24 15:53:27.975667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.656 [2024-07-24 15:53:27.975744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.656 [2024-07-24 15:53:27.975771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:06.657 [2024-07-24 15:53:27.975806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:06.657 [2024-07-24 15:53:27.975827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.657 [2024-07-24 15:53:27.988952] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:06.657 [2024-07-24 15:53:27.989298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.657 [2024-07-24 15:53:27.989339] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:06.657 [2024-07-24 15:53:27.989370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.431 ms 00:28:06.657 [2024-07-24 15:53:27.989393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.657 [2024-07-24 15:53:27.990293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.657 [2024-07-24 15:53:27.990334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:28:06.657 [2024-07-24 15:53:27.990361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.682 ms 00:28:06.657 [2024-07-24 15:53:27.990391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.657 [2024-07-24 15:53:27.993000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.657 [2024-07-24 15:53:27.993048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:06.657 [2024-07-24 15:53:27.993075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.550 ms 00:28:06.657 [2024-07-24 15:53:27.993111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.657 [2024-07-24 15:53:28.026252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.657 [2024-07-24 15:53:28.026337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:28:06.657 [2024-07-24 15:53:28.026382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 33.073 ms 00:28:06.657 [2024-07-24 15:53:28.026403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.657 [2024-07-24 15:53:28.026737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.657 [2024-07-24 15:53:28.026770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:06.657 [2024-07-24 15:53:28.026794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:06.657 [2024-07-24 15:53:28.026815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.657 [2024-07-24 15:53:28.028909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.657 [2024-07-24 15:53:28.028959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:28:06.657 [2024-07-24 15:53:28.028986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.051 ms 00:28:06.657 [2024-07-24 15:53:28.029015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.657 [2024-07-24 15:53:28.029104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.657 [2024-07-24 15:53:28.029135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:06.657 [2024-07-24 15:53:28.029157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:06.657 [2024-07-24 15:53:28.029177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.657 [2024-07-24 15:53:28.029251] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:06.657 [2024-07-24 15:53:28.029289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.657 [2024-07-24 15:53:28.029309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:06.657 [2024-07-24 15:53:28.029330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:28:06.657 [2024-07-24 15:53:28.029349] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:06.657 [2024-07-24 15:53:28.029456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.657 [2024-07-24 15:53:28.029484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:06.657 [2024-07-24 15:53:28.029505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:28:06.657 [2024-07-24 15:53:28.029525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.657 [2024-07-24 15:53:28.030890] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1256.172 ms, result 0 00:28:06.657 [2024-07-24 15:53:28.043392] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.657 [2024-07-24 15:53:28.059424] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:28:06.657 [2024-07-24 15:53:28.068742] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:07.602 Validate MD5 checksum, iteration 1 00:28:07.603 15:53:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:07.603 15:53:28 -- common/autotest_common.sh@852 -- # return 0 00:28:07.603 15:53:28 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:07.603 15:53:28 -- ftl/common.sh@95 -- # return 0 00:28:07.603 15:53:28 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:07.603 15:53:28 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:07.603 15:53:28 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:07.603 15:53:28 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:07.603 15:53:28 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:07.603 15:53:28 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:07.603 15:53:28 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:07.603 15:53:28 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:07.603 15:53:28 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:07.603 15:53:28 -- ftl/common.sh@154 -- # return 0 00:28:07.603 15:53:28 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:07.603 [2024-07-24 15:53:29.020207] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
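With recovery complete (the second 'FTL startup' above took 1256.172 ms against 315.498 ms for the clean start), the same two-iteration checksum pass re-runs; both sums must equal the pre-shutdown values for the dirty shutdown to count as lossless. The loop structure visible in the trace, sketched in plain bash ($testdir stands for /home/vagrant/spdk_repo/spdk/test/ftl; tcp_dd as traced above; reading the reference sum from file.md5 is an assumption, suggested by the rm -f of that file during cleanup):

    # Validate 1024 MiB per iteration, advancing --skip by the block count.
    iterations=2 skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        expected=$(cut -f1 -d' ' "$testdir/file.md5")   # assumed reference source
        [[ $sum == "$expected" ]] || exit 1
        skip=$((skip + 1024))
    done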
00:28:07.603 [2024-07-24 15:53:29.020862] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79019 ] 00:28:07.603 [2024-07-24 15:53:29.198759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.861 [2024-07-24 15:53:29.386551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.220  Copying: 468/1024 [MB] (468 MBps) Copying: 896/1024 [MB] (428 MBps) Copying: 1024/1024 [MB] (average 452 MBps) 00:28:13.220 00:28:13.220 15:53:34 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:13.220 15:53:34 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:15.748 15:53:36 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:15.748 Validate MD5 checksum, iteration 2 00:28:15.748 15:53:36 -- ftl/upgrade_shutdown.sh@103 -- # sum=33b3104801e4d1f63a22012ec88bd2e8 00:28:15.748 15:53:36 -- ftl/upgrade_shutdown.sh@105 -- # [[ 33b3104801e4d1f63a22012ec88bd2e8 != \3\3\b\3\1\0\4\8\0\1\e\4\d\1\f\6\3\a\2\2\0\1\2\e\c\8\8\b\d\2\e\8 ]] 00:28:15.748 15:53:36 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:15.748 15:53:36 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:15.748 15:53:36 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:15.748 15:53:36 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:15.748 15:53:36 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:15.748 15:53:36 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:15.748 15:53:36 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:15.748 15:53:36 -- ftl/common.sh@154 -- # return 0 00:28:15.748 15:53:36 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:15.749 [2024-07-24 15:53:37.036329] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:28:15.749 [2024-07-24 15:53:37.036479] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79102 ] 00:28:15.749 [2024-07-24 15:53:37.202357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.006 [2024-07-24 15:53:37.397443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.001  Copying: 501/1024 [MB] (501 MBps) Copying: 888/1024 [MB] (387 MBps) Copying: 1024/1024 [MB] (average 429 MBps) 00:28:21.001 00:28:21.001 15:53:42 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:21.001 15:53:42 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:22.899 15:53:44 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:22.899 15:53:44 -- ftl/upgrade_shutdown.sh@103 -- # sum=5a126dd73759ae916f97deeec9868d79 00:28:22.899 15:53:44 -- ftl/upgrade_shutdown.sh@105 -- # [[ 5a126dd73759ae916f97deeec9868d79 != \5\a\1\2\6\d\d\7\3\7\5\9\a\e\9\1\6\f\9\7\d\e\e\e\c\9\8\6\8\d\7\9 ]] 00:28:22.899 15:53:44 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:22.899 15:53:44 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:22.899 15:53:44 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:22.899 15:53:44 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:22.899 15:53:44 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:22.899 15:53:44 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:23.157 15:53:44 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:23.157 15:53:44 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:23.157 15:53:44 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:23.157 15:53:44 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:23.157 15:53:44 -- ftl/common.sh@130 -- # [[ -n 78969 ]] 00:28:23.157 15:53:44 -- ftl/common.sh@131 -- # killprocess 78969 00:28:23.157 15:53:44 -- common/autotest_common.sh@926 -- # '[' -z 78969 ']' 00:28:23.157 15:53:44 -- common/autotest_common.sh@930 -- # kill -0 78969 00:28:23.157 15:53:44 -- common/autotest_common.sh@931 -- # uname 00:28:23.157 15:53:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:23.157 15:53:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78969 00:28:23.157 killing process with pid 78969 00:28:23.157 15:53:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:23.157 15:53:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:23.157 15:53:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78969' 00:28:23.157 15:53:44 -- common/autotest_common.sh@945 -- # kill 78969 00:28:23.157 15:53:44 -- common/autotest_common.sh@950 -- # wait 78969 00:28:24.089 [2024-07-24 15:53:45.582476] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:28:24.089 [2024-07-24 15:53:45.599573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.089 [2024-07-24 15:53:45.599650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:24.089 [2024-07-24 15:53:45.599671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:24.089 [2024-07-24 15:53:45.599690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.089 [2024-07-24 
15:53:45.599725] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:24.089 [2024-07-24 15:53:45.603064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.089 [2024-07-24 15:53:45.603122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:24.089 [2024-07-24 15:53:45.603139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.317 ms 00:28:24.089 [2024-07-24 15:53:45.603151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.089 [2024-07-24 15:53:45.603459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.089 [2024-07-24 15:53:45.603496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:24.089 [2024-07-24 15:53:45.603512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.274 ms 00:28:24.089 [2024-07-24 15:53:45.603523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.089 [2024-07-24 15:53:45.604723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.089 [2024-07-24 15:53:45.604773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:24.089 [2024-07-24 15:53:45.604790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.175 ms 00:28:24.089 [2024-07-24 15:53:45.604802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.089 [2024-07-24 15:53:45.606060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.089 [2024-07-24 15:53:45.606099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:28:24.089 [2024-07-24 15:53:45.606115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.213 ms 00:28:24.089 [2024-07-24 15:53:45.606126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.089 [2024-07-24 15:53:45.618706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.089 [2024-07-24 15:53:45.618760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:24.089 [2024-07-24 15:53:45.618780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.527 ms 00:28:24.089 [2024-07-24 15:53:45.618793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.089 [2024-07-24 15:53:45.625335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.089 [2024-07-24 15:53:45.625381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:24.089 [2024-07-24 15:53:45.625398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.493 ms 00:28:24.089 [2024-07-24 15:53:45.625410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.089 [2024-07-24 15:53:45.625506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.089 [2024-07-24 15:53:45.625533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:24.089 [2024-07-24 15:53:45.625547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:28:24.089 [2024-07-24 15:53:45.625558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.089 [2024-07-24 15:53:45.639092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.089 [2024-07-24 15:53:45.639147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:28:24.089 [2024-07-24 15:53:45.639165] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.502 ms 00:28:24.089 [2024-07-24 15:53:45.639176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.089 [2024-07-24 15:53:45.651679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.089 [2024-07-24 15:53:45.651722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:28:24.089 [2024-07-24 15:53:45.651738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.416 ms 00:28:24.089 [2024-07-24 15:53:45.651749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.089 [2024-07-24 15:53:45.664170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.089 [2024-07-24 15:53:45.664243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:24.089 [2024-07-24 15:53:45.664263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.372 ms 00:28:24.089 [2024-07-24 15:53:45.664275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.089 [2024-07-24 15:53:45.676650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.089 [2024-07-24 15:53:45.676705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:24.089 [2024-07-24 15:53:45.676723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.283 ms 00:28:24.089 [2024-07-24 15:53:45.676735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.089 [2024-07-24 15:53:45.676783] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:24.089 [2024-07-24 15:53:45.676808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:24.089 [2024-07-24 15:53:45.676822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:24.089 [2024-07-24 15:53:45.676835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:24.089 [2024-07-24 15:53:45.676847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:24.089 [2024-07-24 15:53:45.676859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:24.089 [2024-07-24 15:53:45.676870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:24.089 [2024-07-24 15:53:45.676882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:24.089 [2024-07-24 15:53:45.676894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:24.089 [2024-07-24 15:53:45.676906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:24.089 [2024-07-24 15:53:45.676918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:24.089 [2024-07-24 15:53:45.676929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:24.089 [2024-07-24 15:53:45.676941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:24.089 [2024-07-24 15:53:45.676952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:24.089 [2024-07-24 15:53:45.676964] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:24.089 [2024-07-24 15:53:45.676976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:24.089 [2024-07-24 15:53:45.676987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:24.089 [2024-07-24 15:53:45.676999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:24.090 [2024-07-24 15:53:45.677010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:24.090 [2024-07-24 15:53:45.677025] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:24.090 [2024-07-24 15:53:45.677055] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 6745893d-90a2-4d28-96e3-df48f8e2d21c 00:28:24.090 [2024-07-24 15:53:45.677067] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:24.090 [2024-07-24 15:53:45.677078] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:24.090 [2024-07-24 15:53:45.677113] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:24.090 [2024-07-24 15:53:45.677133] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:24.090 [2024-07-24 15:53:45.677144] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:24.090 [2024-07-24 15:53:45.677155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:24.090 [2024-07-24 15:53:45.677167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:24.090 [2024-07-24 15:53:45.677177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:24.090 [2024-07-24 15:53:45.677187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:24.090 [2024-07-24 15:53:45.677199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.090 [2024-07-24 15:53:45.677211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:24.090 [2024-07-24 15:53:45.677223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.418 ms 00:28:24.090 [2024-07-24 15:53:45.677236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.350 [2024-07-24 15:53:45.694013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.350 [2024-07-24 15:53:45.694073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:24.350 [2024-07-24 15:53:45.694106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.745 ms 00:28:24.350 [2024-07-24 15:53:45.694120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.350 [2024-07-24 15:53:45.694376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.350 [2024-07-24 15:53:45.694402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:24.350 [2024-07-24 15:53:45.694415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.209 ms 00:28:24.350 [2024-07-24 15:53:45.694426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.350 [2024-07-24 15:53:45.753843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.350 [2024-07-24 15:53:45.754094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:24.350 [2024-07-24 15:53:45.754226] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.350 [2024-07-24 15:53:45.754280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.350 [2024-07-24 15:53:45.754465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.350 [2024-07-24 15:53:45.754512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:24.350 [2024-07-24 15:53:45.754550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.350 [2024-07-24 15:53:45.754586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.350 [2024-07-24 15:53:45.754832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.350 [2024-07-24 15:53:45.754889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:24.350 [2024-07-24 15:53:45.755041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.350 [2024-07-24 15:53:45.755105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.350 [2024-07-24 15:53:45.755207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.350 [2024-07-24 15:53:45.755252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:24.350 [2024-07-24 15:53:45.755289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.350 [2024-07-24 15:53:45.755327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.350 [2024-07-24 15:53:45.860223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.350 [2024-07-24 15:53:45.860536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:24.350 [2024-07-24 15:53:45.860569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.350 [2024-07-24 15:53:45.860583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.350 [2024-07-24 15:53:45.900813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.350 [2024-07-24 15:53:45.900899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:24.350 [2024-07-24 15:53:45.900920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.350 [2024-07-24 15:53:45.900932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.350 [2024-07-24 15:53:45.901045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.350 [2024-07-24 15:53:45.901065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:24.350 [2024-07-24 15:53:45.901077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.350 [2024-07-24 15:53:45.901136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.350 [2024-07-24 15:53:45.901208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.350 [2024-07-24 15:53:45.901225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:24.350 [2024-07-24 15:53:45.901238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.350 [2024-07-24 15:53:45.901249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.350 [2024-07-24 15:53:45.901375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.350 [2024-07-24 15:53:45.901394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:24.350 [2024-07-24 
15:53:45.901407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.350 [2024-07-24 15:53:45.901419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.350 [2024-07-24 15:53:45.901477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.350 [2024-07-24 15:53:45.901494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:24.350 [2024-07-24 15:53:45.901506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.350 [2024-07-24 15:53:45.901518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.350 [2024-07-24 15:53:45.901564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.350 [2024-07-24 15:53:45.901579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:24.350 [2024-07-24 15:53:45.901591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.350 [2024-07-24 15:53:45.901602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.350 [2024-07-24 15:53:45.901665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.350 [2024-07-24 15:53:45.901681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:24.350 [2024-07-24 15:53:45.901693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.350 [2024-07-24 15:53:45.901704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.350 [2024-07-24 15:53:45.901855] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 302.244 ms, result 0 00:28:25.724 15:53:47 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:25.724 15:53:47 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:25.724 15:53:47 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:25.724 15:53:47 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:25.724 15:53:47 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:25.725 15:53:47 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:25.725 Remove shared memory files 00:28:25.725 15:53:47 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:25.725 15:53:47 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:25.725 15:53:47 -- ftl/common.sh@205 -- # rm -f rm -f 00:28:25.725 15:53:47 -- ftl/common.sh@206 -- # rm -f rm -f 00:28:25.725 15:53:47 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid78755 00:28:25.725 15:53:47 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:25.725 15:53:47 -- ftl/common.sh@209 -- # rm -f rm -f 00:28:25.725 ************************************ 00:28:25.725 END TEST ftl_upgrade_shutdown 00:28:25.725 ************************************ 00:28:25.725 00:28:25.725 real 1m36.407s 00:28:25.725 user 2m20.580s 00:28:25.725 sys 0m23.524s 00:28:25.725 15:53:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:25.725 15:53:47 -- common/autotest_common.sh@10 -- # set +x 00:28:25.725 15:53:47 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:28:25.725 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:28:25.725 15:53:47 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:28:25.725 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:28:25.725 15:53:47 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:25.725 Process with pid 71379 is not found 00:28:25.725 15:53:47 
-- ftl/ftl.sh@14 -- # killprocess 71379 00:28:25.725 15:53:47 -- common/autotest_common.sh@926 -- # '[' -z 71379 ']' 00:28:25.725 15:53:47 -- common/autotest_common.sh@930 -- # kill -0 71379 00:28:25.725 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (71379) - No such process 00:28:25.725 15:53:47 -- common/autotest_common.sh@953 -- # echo 'Process with pid 71379 is not found' 00:28:25.725 15:53:47 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:28:25.725 15:53:47 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79236 00:28:25.725 15:53:47 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:25.725 15:53:47 -- ftl/ftl.sh@20 -- # waitforlisten 79236 00:28:25.725 15:53:47 -- common/autotest_common.sh@819 -- # '[' -z 79236 ']' 00:28:25.725 15:53:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.725 15:53:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:25.725 15:53:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.725 15:53:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:25.725 15:53:47 -- common/autotest_common.sh@10 -- # set +x 00:28:25.725 [2024-07-24 15:53:47.194252] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:25.725 [2024-07-24 15:53:47.194675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79236 ] 00:28:25.983 [2024-07-24 15:53:47.363518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.241 [2024-07-24 15:53:47.589724] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:26.241 [2024-07-24 15:53:47.590316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.617 15:53:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:27.617 15:53:48 -- common/autotest_common.sh@852 -- # return 0 00:28:27.617 15:53:48 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:28:27.874 nvme0n1 00:28:27.874 15:53:49 -- ftl/ftl.sh@22 -- # clear_lvols 00:28:27.874 15:53:49 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:27.874 15:53:49 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:28.132 15:53:49 -- ftl/common.sh@28 -- # stores=0a5aaa92-7755-47d5-96e5-4e7831cc579a 00:28:28.132 15:53:49 -- ftl/common.sh@29 -- # for lvs in $stores 00:28:28.132 15:53:49 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0a5aaa92-7755-47d5-96e5-4e7831cc579a 00:28:28.390 15:53:49 -- ftl/ftl.sh@23 -- # killprocess 79236 00:28:28.390 15:53:49 -- common/autotest_common.sh@926 -- # '[' -z 79236 ']' 00:28:28.390 15:53:49 -- common/autotest_common.sh@930 -- # kill -0 79236 00:28:28.390 15:53:49 -- common/autotest_common.sh@931 -- # uname 00:28:28.390 15:53:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:28.390 15:53:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79236 00:28:28.390 killing process with pid 79236 00:28:28.390 15:53:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
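The killprocess helper traced twice in this section (pids 78969 and 79236) is a guard-railed kill from test/common/autotest_common.sh; the reconstruction below follows the logged trace, lightly simplified (the real helper also prints 'Process with pid N is not found' when kill -0 fails, as seen above for pid 71379).

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1      # bail out if the pid is already gone
        if [[ $(uname) == Linux ]]; then
            # Resolve the real command name and refuse to kill a sudo wrapper.
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name == sudo ]] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                     # reap it and propagate its exit status
    }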
00:28:28.390 15:53:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:28.390 15:53:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79236' 00:28:28.390 15:53:49 -- common/autotest_common.sh@945 -- # kill 79236 00:28:28.390 15:53:49 -- common/autotest_common.sh@950 -- # wait 79236 00:28:30.915 15:53:52 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:30.915 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:30.915 Waiting for block devices as requested 00:28:30.915 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:28:30.915 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:28:30.915 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:28:31.173 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:28:36.451 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:28:36.451 15:53:57 -- ftl/ftl.sh@28 -- # remove_shm 00:28:36.451 Remove shared memory files 00:28:36.451 15:53:57 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:36.451 15:53:57 -- ftl/common.sh@205 -- # rm -f rm -f 00:28:36.451 15:53:57 -- ftl/common.sh@206 -- # rm -f rm -f 00:28:36.451 15:53:57 -- ftl/common.sh@207 -- # rm -f rm -f 00:28:36.451 15:53:57 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:36.451 15:53:57 -- ftl/common.sh@209 -- # rm -f rm -f 00:28:36.451 00:28:36.451 real 11m41.428s 00:28:36.451 user 14m47.171s 00:28:36.451 sys 1m32.248s 00:28:36.451 15:53:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:36.451 15:53:57 -- common/autotest_common.sh@10 -- # set +x 00:28:36.451 ************************************ 00:28:36.451 END TEST ftl 00:28:36.451 ************************************ 00:28:36.451 15:53:57 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:36.451 15:53:57 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:28:36.451 15:53:57 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:28:36.451 15:53:57 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:28:36.451 15:53:57 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:28:36.451 15:53:57 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:36.451 15:53:57 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:28:36.451 15:53:57 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:28:36.451 15:53:57 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:28:36.451 15:53:57 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:28:36.451 15:53:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:36.451 15:53:57 -- common/autotest_common.sh@10 -- # set +x 00:28:36.452 15:53:57 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:28:36.452 15:53:57 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:28:36.452 15:53:57 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:28:36.452 15:53:57 -- common/autotest_common.sh@10 -- # set +x 00:28:37.404 INFO: APP EXITING 00:28:37.404 INFO: killing all VMs 00:28:37.404 INFO: killing vhost app 00:28:37.404 INFO: EXIT DONE 00:28:38.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:38.339 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:28:38.339 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:28:38.339 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:28:38.339 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:28:38.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, 
so not binding PCI dev 00:28:39.164 Cleaning 00:28:39.164 Removing: /var/run/dpdk/spdk0/config 00:28:39.164 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:39.164 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:39.164 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:39.164 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:39.164 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:39.164 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:39.164 Removing: /var/run/dpdk/spdk0 00:28:39.164 Removing: /var/run/dpdk/spdk_pid56222 00:28:39.164 Removing: /var/run/dpdk/spdk_pid56437 00:28:39.164 Removing: /var/run/dpdk/spdk_pid56731 00:28:39.164 Removing: /var/run/dpdk/spdk_pid56835 00:28:39.164 Removing: /var/run/dpdk/spdk_pid56935 00:28:39.164 Removing: /var/run/dpdk/spdk_pid57050 00:28:39.164 Removing: /var/run/dpdk/spdk_pid57151 00:28:39.164 Removing: /var/run/dpdk/spdk_pid57196 00:28:39.164 Removing: /var/run/dpdk/spdk_pid57227 00:28:39.164 Removing: /var/run/dpdk/spdk_pid57294 00:28:39.164 Removing: /var/run/dpdk/spdk_pid57400 00:28:39.164 Removing: /var/run/dpdk/spdk_pid57850 00:28:39.164 Removing: /var/run/dpdk/spdk_pid57924 00:28:39.164 Removing: /var/run/dpdk/spdk_pid58003 00:28:39.164 Removing: /var/run/dpdk/spdk_pid58032 00:28:39.164 Removing: /var/run/dpdk/spdk_pid58166 00:28:39.164 Removing: /var/run/dpdk/spdk_pid58189 00:28:39.164 Removing: /var/run/dpdk/spdk_pid58328 00:28:39.164 Removing: /var/run/dpdk/spdk_pid58356 00:28:39.164 Removing: /var/run/dpdk/spdk_pid58421 00:28:39.164 Removing: /var/run/dpdk/spdk_pid58447 00:28:39.164 Removing: /var/run/dpdk/spdk_pid58511 00:28:39.164 Removing: /var/run/dpdk/spdk_pid58543 00:28:39.164 Removing: /var/run/dpdk/spdk_pid58720 00:28:39.164 Removing: /var/run/dpdk/spdk_pid58757 00:28:39.164 Removing: /var/run/dpdk/spdk_pid58837 00:28:39.164 Removing: /var/run/dpdk/spdk_pid58920 00:28:39.164 Removing: /var/run/dpdk/spdk_pid58951 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59029 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59055 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59101 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59134 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59175 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59207 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59253 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59279 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59326 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59352 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59398 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59430 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59471 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59497 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59549 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59575 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59616 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59652 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59694 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59726 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59767 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59798 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59845 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59872 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59924 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59950 00:28:39.164 Removing: /var/run/dpdk/spdk_pid59991 00:28:39.164 Removing: /var/run/dpdk/spdk_pid60023 00:28:39.164 Removing: /var/run/dpdk/spdk_pid60069 00:28:39.164 Removing: /var/run/dpdk/spdk_pid60095 00:28:39.164 Removing: /var/run/dpdk/spdk_pid60142 00:28:39.164 
Removing: /var/run/dpdk/spdk_pid60168 00:28:39.164 Removing: /var/run/dpdk/spdk_pid60220 00:28:39.164 Removing: /var/run/dpdk/spdk_pid60249 00:28:39.164 Removing: /var/run/dpdk/spdk_pid60293 00:28:39.164 Removing: /var/run/dpdk/spdk_pid60333 00:28:39.164 Removing: /var/run/dpdk/spdk_pid60377 00:28:39.164 Removing: /var/run/dpdk/spdk_pid60409 00:28:39.164 Removing: /var/run/dpdk/spdk_pid60455 00:28:39.164 Removing: /var/run/dpdk/spdk_pid60487 00:28:39.423 Removing: /var/run/dpdk/spdk_pid60529 00:28:39.423 Removing: /var/run/dpdk/spdk_pid60610 00:28:39.423 Removing: /var/run/dpdk/spdk_pid60725 00:28:39.423 Removing: /var/run/dpdk/spdk_pid60892 00:28:39.423 Removing: /var/run/dpdk/spdk_pid60995 00:28:39.423 Removing: /var/run/dpdk/spdk_pid61037 00:28:39.423 Removing: /var/run/dpdk/spdk_pid61504 00:28:39.423 Removing: /var/run/dpdk/spdk_pid61681 00:28:39.423 Removing: /var/run/dpdk/spdk_pid61785 00:28:39.423 Removing: /var/run/dpdk/spdk_pid61844 00:28:39.423 Removing: /var/run/dpdk/spdk_pid61875 00:28:39.423 Removing: /var/run/dpdk/spdk_pid61950 00:28:39.423 Removing: /var/run/dpdk/spdk_pid62616 00:28:39.423 Removing: /var/run/dpdk/spdk_pid62659 00:28:39.423 Removing: /var/run/dpdk/spdk_pid63179 00:28:39.423 Removing: /var/run/dpdk/spdk_pid63288 00:28:39.423 Removing: /var/run/dpdk/spdk_pid63396 00:28:39.423 Removing: /var/run/dpdk/spdk_pid63451 00:28:39.423 Removing: /var/run/dpdk/spdk_pid63476 00:28:39.423 Removing: /var/run/dpdk/spdk_pid63507 00:28:39.423 Removing: /var/run/dpdk/spdk_pid65446 00:28:39.423 Removing: /var/run/dpdk/spdk_pid65596 00:28:39.423 Removing: /var/run/dpdk/spdk_pid65600 00:28:39.423 Removing: /var/run/dpdk/spdk_pid65618 00:28:39.423 Removing: /var/run/dpdk/spdk_pid65657 00:28:39.423 Removing: /var/run/dpdk/spdk_pid65661 00:28:39.423 Removing: /var/run/dpdk/spdk_pid65678 00:28:39.423 Removing: /var/run/dpdk/spdk_pid65723 00:28:39.423 Removing: /var/run/dpdk/spdk_pid65727 00:28:39.423 Removing: /var/run/dpdk/spdk_pid65739 00:28:39.423 Removing: /var/run/dpdk/spdk_pid65784 00:28:39.423 Removing: /var/run/dpdk/spdk_pid65788 00:28:39.423 Removing: /var/run/dpdk/spdk_pid65800 00:28:39.423 Removing: /var/run/dpdk/spdk_pid67200 00:28:39.423 Removing: /var/run/dpdk/spdk_pid67301 00:28:39.423 Removing: /var/run/dpdk/spdk_pid67443 00:28:39.423 Removing: /var/run/dpdk/spdk_pid67574 00:28:39.423 Removing: /var/run/dpdk/spdk_pid67689 00:28:39.423 Removing: /var/run/dpdk/spdk_pid67815 00:28:39.423 Removing: /var/run/dpdk/spdk_pid67953 00:28:39.423 Removing: /var/run/dpdk/spdk_pid68033 00:28:39.423 Removing: /var/run/dpdk/spdk_pid68178 00:28:39.423 Removing: /var/run/dpdk/spdk_pid68574 00:28:39.423 Removing: /var/run/dpdk/spdk_pid68616 00:28:39.423 Removing: /var/run/dpdk/spdk_pid69089 00:28:39.423 Removing: /var/run/dpdk/spdk_pid69272 00:28:39.423 Removing: /var/run/dpdk/spdk_pid69381 00:28:39.423 Removing: /var/run/dpdk/spdk_pid69494 00:28:39.423 Removing: /var/run/dpdk/spdk_pid69553 00:28:39.423 Removing: /var/run/dpdk/spdk_pid69580 00:28:39.423 Removing: /var/run/dpdk/spdk_pid69879 00:28:39.423 Removing: /var/run/dpdk/spdk_pid69948 00:28:39.423 Removing: /var/run/dpdk/spdk_pid70029 00:28:39.423 Removing: /var/run/dpdk/spdk_pid70426 00:28:39.423 Removing: /var/run/dpdk/spdk_pid70581 00:28:39.423 Removing: /var/run/dpdk/spdk_pid71379 00:28:39.423 Removing: /var/run/dpdk/spdk_pid71513 00:28:39.423 Removing: /var/run/dpdk/spdk_pid71741 00:28:39.423 Removing: /var/run/dpdk/spdk_pid71838 00:28:39.423 Removing: /var/run/dpdk/spdk_pid72209 00:28:39.423 Removing: 
/var/run/dpdk/spdk_pid72478 00:28:39.423 Removing: /var/run/dpdk/spdk_pid72841 00:28:39.423 Removing: /var/run/dpdk/spdk_pid73061 00:28:39.423 Removing: /var/run/dpdk/spdk_pid73191 00:28:39.423 Removing: /var/run/dpdk/spdk_pid73267 00:28:39.423 Removing: /var/run/dpdk/spdk_pid73412 00:28:39.423 Removing: /var/run/dpdk/spdk_pid73448 00:28:39.423 Removing: /var/run/dpdk/spdk_pid73515 00:28:39.423 Removing: /var/run/dpdk/spdk_pid73722 00:28:39.423 Removing: /var/run/dpdk/spdk_pid74011 00:28:39.423 Removing: /var/run/dpdk/spdk_pid74397 00:28:39.423 Removing: /var/run/dpdk/spdk_pid74852 00:28:39.423 Removing: /var/run/dpdk/spdk_pid75274 00:28:39.423 Removing: /var/run/dpdk/spdk_pid75782 00:28:39.423 Removing: /var/run/dpdk/spdk_pid75930 00:28:39.423 Removing: /var/run/dpdk/spdk_pid76034 00:28:39.423 Removing: /var/run/dpdk/spdk_pid76682 00:28:39.423 Removing: /var/run/dpdk/spdk_pid76767 00:28:39.423 Removing: /var/run/dpdk/spdk_pid77221 00:28:39.423 Removing: /var/run/dpdk/spdk_pid77633 00:28:39.423 Removing: /var/run/dpdk/spdk_pid78123 00:28:39.423 Removing: /var/run/dpdk/spdk_pid78243 00:28:39.423 Removing: /var/run/dpdk/spdk_pid78308 00:28:39.423 Removing: /var/run/dpdk/spdk_pid78378 00:28:39.423 Removing: /var/run/dpdk/spdk_pid78447 00:28:39.423 Removing: /var/run/dpdk/spdk_pid78523 00:28:39.423 Removing: /var/run/dpdk/spdk_pid78755 00:28:39.423 Removing: /var/run/dpdk/spdk_pid78807 00:28:39.423 Removing: /var/run/dpdk/spdk_pid78884 00:28:39.423 Removing: /var/run/dpdk/spdk_pid78969 00:28:39.423 Removing: /var/run/dpdk/spdk_pid79019 00:28:39.423 Removing: /var/run/dpdk/spdk_pid79102 00:28:39.423 Removing: /var/run/dpdk/spdk_pid79236 00:28:39.681 Clean 00:28:39.681 killing process with pid 48331 00:28:39.681 killing process with pid 48332 00:28:39.681 15:54:01 -- common/autotest_common.sh@1436 -- # return 0 00:28:39.681 15:54:01 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:28:39.681 15:54:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:39.681 15:54:01 -- common/autotest_common.sh@10 -- # set +x 00:28:39.681 15:54:01 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:28:39.681 15:54:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:39.681 15:54:01 -- common/autotest_common.sh@10 -- # set +x 00:28:39.681 15:54:01 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:39.681 15:54:01 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:39.681 15:54:01 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:39.681 15:54:01 -- spdk/autotest.sh@394 -- # hash lcov 00:28:39.681 15:54:01 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:39.681 15:54:01 -- spdk/autotest.sh@396 -- # hostname 00:28:39.681 15:54:01 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:39.945 geninfo: WARNING: invalid characters removed from testname! 
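The geninfo warning above is harmless: the testname passed via -t is the VM hostname (fedora38-cloud-1716830599-074-updated-1705279005, per the logged command line), and lcov strips the characters it does not accept. The coverage post-processing traced below reduces to one merge plus a series of path-exclusion filters; a condensed sketch follows, with the long --rc flag set abbreviated to the two branch/function switches.

    out=/home/vagrant/spdk_repo/spdk/../output
    rc='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    # Merge the pre-test baseline with the post-test capture.
    lcov $rc --no-external -q \
        -a "$out/cov_base.info" -a "$out/cov_test.info" \
        -o "$out/cov_total.info"
    # Strip third-party and uninteresting paths from the merged report.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
                   '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $rc --no-external -q \
            -r "$out/cov_total.info" "$pattern" \
            -o "$out/cov_total.info"
    done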
00:29:06.530 15:54:28 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:10.710 15:54:31 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:13.240 15:54:34 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:15.794 15:54:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:18.326 15:54:39 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:21.643 15:54:42 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:24.173 15:54:45 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:24.173 15:54:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:24.173 15:54:45 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:24.173 15:54:45 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.173 15:54:45 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.173 15:54:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.173 15:54:45 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.173 15:54:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.173 15:54:45 -- paths/export.sh@5 -- $ export PATH 00:29:24.173 15:54:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.173 15:54:45 -- common/autobuild_common.sh@437 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:24.173 15:54:45 -- common/autobuild_common.sh@438 -- $ date +%s 00:29:24.173 15:54:45 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721836485.XXXXXX 00:29:24.173 15:54:45 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721836485.dajUbw 00:29:24.173 15:54:45 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:29:24.173 15:54:45 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']' 00:29:24.173 15:54:45 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:29:24.173 15:54:45 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:24.173 15:54:45 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:24.173 15:54:45 -- common/autobuild_common.sh@454 -- $ get_config_params 00:29:24.173 15:54:45 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:29:24.173 15:54:45 -- common/autotest_common.sh@10 -- $ set +x 00:29:24.173 15:54:45 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:29:24.173 15:54:45 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:29:24.173 15:54:45 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:29:24.173 15:54:45 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:24.173 15:54:45 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:29:24.173 15:54:45 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:24.173 15:54:45 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:24.173 15:54:45 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:24.173 15:54:45 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:24.173 15:54:45 -- common/autotest_common.sh@727 -- $ 
/usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:24.173 15:54:45 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:24.173 + [[ -n 5150 ]] 00:29:24.173 + sudo kill 5150 00:29:24.182 [Pipeline] } 00:29:24.198 [Pipeline] // timeout 00:29:24.201 [Pipeline] } 00:29:24.216 [Pipeline] // stage 00:29:24.221 [Pipeline] } 00:29:24.236 [Pipeline] // catchError 00:29:24.244 [Pipeline] stage 00:29:24.245 [Pipeline] { (Stop VM) 00:29:24.258 [Pipeline] sh 00:29:24.552 + vagrant halt 00:29:28.761 ==> default: Halting domain... 00:29:34.039 [Pipeline] sh 00:29:34.319 + vagrant destroy -f 00:29:38.544 ==> default: Removing domain... 00:29:38.556 [Pipeline] sh 00:29:38.833 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:29:38.841 [Pipeline] } 00:29:38.854 [Pipeline] // stage 00:29:38.859 [Pipeline] } 00:29:38.872 [Pipeline] // dir 00:29:38.875 [Pipeline] } 00:29:38.888 [Pipeline] // wrap 00:29:38.893 [Pipeline] } 00:29:38.908 [Pipeline] // catchError 00:29:38.915 [Pipeline] stage 00:29:38.916 [Pipeline] { (Epilogue) 00:29:38.929 [Pipeline] sh 00:29:39.209 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:45.772 [Pipeline] catchError 00:29:45.773 [Pipeline] { 00:29:45.784 [Pipeline] sh 00:29:46.058 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:46.317 Artifacts sizes are good 00:29:46.326 [Pipeline] } 00:29:46.342 [Pipeline] // catchError 00:29:46.350 [Pipeline] archiveArtifacts 00:29:46.354 Archiving artifacts 00:29:46.504 [Pipeline] cleanWs 00:29:46.516 [WS-CLEANUP] Deleting project workspace... 00:29:46.516 [WS-CLEANUP] Deferred wipeout is used... 00:29:46.522 [WS-CLEANUP] done 00:29:46.524 [Pipeline] } 00:29:46.539 [Pipeline] // stage 00:29:46.542 [Pipeline] } 00:29:46.553 [Pipeline] // node 00:29:46.556 [Pipeline] End of Pipeline 00:29:46.580 Finished: SUCCESS
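Stripped of the [Pipeline] framing, the epilogue above is a fixed teardown sequence; the equivalent shell steps, with every command and path exactly as logged:

    vagrant halt                # "Halting domain..."
    vagrant destroy -f          # "Removing domain..."
    mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
    jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
    jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh   # "Artifacts sizes are good"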